Q: Which query explanation method was preferred by the users in terms of correctness?
A: The hybrid approach (returning the user-selected query, falling back to the parser's top candidate).
Introduction
Natural language interfaces have been gaining significant popularity, enabling ordinary users to write and execute complex queries. One of the prominent paradigms for developing NL interfaces is semantic parsing, which is the mapping of NL phrases into a formal language. As Machine Learning techniques are standard in semantic parsing, a training set of question-answer pairs is provided alongside a target database BIBREF0 , BIBREF1 , BIBREF2 . The parser is a parameterized function that is trained by updating its parameters such that questions from the training set are translated into queries that yield the correct answers.
A crucial challenge for using semantic parsers is their reliability. Flawless translation from NL to formal language is an open problem, and even state-of-the-art parsers are not always right. With no explanation of the executed query, users are left wondering if the result is actually correct. Consider the example in Figure FIGREF1 , displaying a table of Olympic games and the question "Greece held its last Olympics in what year?". A semantic parser parsing the question generates multiple candidate queries and returns the evaluation result of its top ranked query. The user is only presented with the evaluation result, 2004. Although the end result is correct, she has no clear indication whether the question was correctly parsed. In fact, the interface might have chosen any candidate query yielding 2004. Ensuring the system has executed a correct query (rather than simply returning a correct answer in a particular instance) is essential, as it enables reusing the query as the data evolves over time. For example, a user might wish for a query such as "The average price of the top 5 stocks on Wall Street" to be run on a daily basis. Only its correct translation into SQL will consistently return accurate results.
Our approach is to design provenance-based BIBREF3 , BIBREF4 query explanations that are extensible, domain-independent and immediately understandable by non-expert users. We devise a cell-based provenance model for explaining formal queries over web tables and implement it with our query explanations (see Figure FIGREF1 ). We enhance an existing NL interface for querying tables BIBREF5 by introducing a novel component featuring our query explanations. Following the parsing of an input NL question, our component explains the candidate queries to users, allowing non-experts to choose the one that best fits their intention. The immediate application is to improve the quality of obtained queries at deployment time over simply choosing the parser's top query (without user feedback). Furthermore, we show how query explanations can be used to obtain user feedback which is used to retrain the Machine Learning system, thereby improving its performance.
System Overview
We review our system architecture from Figure FIGREF7 and describe its general workflow.
Preliminaries
We begin by formally defining our task of querying tables. Afterwards, we discuss the formal query language and show how lambda DCS queries can be translated directly into SQL.
Data Model
An NL interface for querying tables receives a question INLINEFORM0 on a table INLINEFORM1 and outputs a set of values INLINEFORM2 as the answer (where each value is either the content of a cell, or the result of an aggregate function on cells). As discussed in the introduction, we make the assumption that a query concerns a single table.
Following the model presented in BIBREF1 , all table records are ordered from top to bottom with each record possessing a unique INLINEFORM0 (0, 1, 2, ...). In addition, every record has a pointer INLINEFORM1 to the record above it. The values of table cells can be either strings, numbers or dates. While we view the table as a relation, it is common BIBREF1 , BIBREF5 to describe it as a knowledge base (KB) INLINEFORM2 where INLINEFORM3 is a set of entities and INLINEFORM4 a set of binary properties. The entity set, INLINEFORM5 is comprised of all table cells (e.g., INLINEFORM6 ) and all table records, while INLINEFORM7 contains all column headers, serving as binary relations from an entity to the table records it appears in. In the example of Figure FIGREF1 , column Country is a binary relation such that Country.Greece returns all table records where the value of column Country is Greece (see definition of composition operators below). If the table in Figure FIGREF1 has INLINEFORM8 records, the returned records indices will be INLINEFORM9 .
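To make the data model concrete, here is a minimal sketch of a table represented as a knowledge base; the row contents and field names are illustrative assumptions (loosely modeled on Figure FIGREF1), not the paper's implementation.

```python
from collections import defaultdict

# Illustrative rows standing in for the table of Figure FIGREF1 (not the original data).
rows = [
    {"Year": 1896, "City": "Athens", "Country": "Greece"},
    {"Year": 1900, "City": "Paris",  "Country": "France"},
    {"Year": 2004, "City": "Athens", "Country": "Greece"},
]

# Records are ordered top to bottom with a unique index and a pointer to the record above.
records = [{"index": i, "above": i - 1 if i > 0 else None, "cells": r}
           for i, r in enumerate(rows)]

# KB view: entities are cells and records; properties are column headers, each acting as
# a binary relation from a cell value to the indices of the records it appears in.
properties = defaultdict(lambda: defaultdict(set))
for rec in records:
    for column, value in rec["cells"].items():
        properties[column][value].add(rec["index"])

# Country.Greece -> indices of all records where the value of column Country is Greece.
print(sorted(properties["Country"]["Greece"]))   # [0, 2]
```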
Query Language
Following the definition of our data model we introduce our formal query language, lambda dependency-based compositional semantics (lambda DCS) BIBREF6 , BIBREF0 , a language inspired by lambda calculus that revolves around sets. Lambda DCS was originally designed for building an NL interface over Freebase BIBREF9 .
Lambda DCS is a highly expressive language, designed to represent complex NL questions involving sorting, aggregation, intersection and more. It has been considered a standard language for performing semantic parsing over knowledge bases BIBREF6 , BIBREF0 , BIBREF1 , BIBREF5 . A lambda DCS formula is executed against a target table and returns either a set of values (string, number or date) or a set of table records. We describe here a simplified version of lambda DCS that will be sufficient for understanding the examples presented in this paper. For a full description of lambda DCS, the reader should refer to BIBREF6 . The basic constructs of lambda DCS are as follows:
Unary: a set of values. The simplest type of unary in a table is a table cell, e.g., Greece, which denotes the set of cells containing the entity 'Greece'.
Binary: A binary relation describes a relation between sets of objects. The simplest type of a binary relation is a table column INLINEFORM0 , mapping table entities to the records where they appear, e.g., Country.
Join: For a binary relation INLINEFORM0 and unary relation INLINEFORM1 , INLINEFORM2 operates as a selection and projection. INLINEFORM3 denotes all table records where the value of column Country is Greece.
Prev: Given records INLINEFORM0 the INLINEFORM1 operator will return the set of preceding table records, INLINEFORM2 .
Reverse: Given a binary relation INLINEFORM0 from INLINEFORM1 to INLINEFORM2 , there is a reversed binary relation R[ INLINEFORM3 ] from INLINEFORM4 to INLINEFORM5 . E.g., for a column binary relation INLINEFORM6 from table values to their records, R[ INLINEFORM7 ] is a relation from records to values. R[Year].Country.Greece takes all the record indices of Country.Greece and returns the values of column Year in these records. Similarly, R[Prev] denotes a relation from a set of records, to the set of following (reverse of previous) table records.
Intersection: Intersection of sets. E.g., the set of records where Country is Greece and also where Year is 2004, Country.Greece INLINEFORM0 Year.2004.
Union: Union of sets. E.g., records where the value of column Country is Greece or China, Country.Greece INLINEFORM0 Country.China.
Aggregation: Aggregate functions min, max, avg, sum, count that take a unary and return a unary with one number. E.g., INLINEFORM0 returns the number of records where the value of City is Athens.
Superlatives: argmax, argmin. For unary INLINEFORM0 and binary INLINEFORM1 , INLINEFORM2 is the set of all values INLINEFORM3 .
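To illustrate how these constructs compose, the following is a minimal set-based interpreter sketch over a toy table; the function names, table rows and query encodings are assumptions made for the example and are not the parser's implementation.

```python
# Toy table: one dict per record, keyed by column header plus a unique index.
TABLE = [
    {"index": 0, "Year": 1896, "City": "Athens",  "Country": "Greece"},
    {"index": 1, "Year": 1900, "City": "Paris",   "Country": "France"},
    {"index": 2, "Year": 2004, "City": "Athens",  "Country": "Greece"},
    {"index": 3, "Year": 2008, "City": "Beijing", "Country": "China"},
]

def join(column, values):
    """Join: column.values -> indices of records whose value in `column` is in `values`."""
    return {r["index"] for r in TABLE if r[column] in values}

def reverse(column, record_indices):
    """Reverse: R[column] applied to a record set -> values of `column` in those records."""
    return {r[column] for r in TABLE if r["index"] in record_indices}

def prev(record_indices):
    """Prev: the set of records preceding the given ones."""
    return {i - 1 for i in record_indices if i - 1 >= 0}

def aggregate(fn, values):
    """Aggregation: min, max, avg, sum, count over a unary."""
    values = list(values)
    return {"min": min, "max": max, "sum": sum, "count": len,
            "avg": lambda v: sum(v) / len(v)}[fn](values)

# R[Year].Country.Greece : values of column Year in records where Country is Greece.
print(reverse("Year", join("Country", {"Greece"})))        # {1896, 2004}
# count(City.Athens) : number of records where the value of City is Athens.
print(aggregate("count", join("City", {"Athens"})))        # 2
# Intersection and union of record sets are plain set operations.
print(join("Country", {"Greece"}) & join("Year", {2004}))  # {2}
```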
In this paper we use a group of predefined operators specifically designed for the task of querying tables BIBREF1 . The language operators are compositional in nature, allowing the semantic parser to compose several sub-formulas into a single formula representing complex query operations.
Example 3.1 Consider the following lambda DCS query on the table from Figure FIGREF1 , INLINEFORM0
it returns values of column City (binary) appearing in records (Record unary) that have the lowest value in column Year.
To position our work in the context of relational queries we show lambda DCS to be an expressive fragment of SQL. The translation into SQL proves useful when introducing our provenance model by aligning our model with previous work BIBREF10 , BIBREF4 . Table TABREF69 (presented at the end of the paper) describes all lambda DCS operators with their corresponding translation into SQL.
Example 3.2 Returning to the lambda DCS query from the previous example, it can be easily translated to SQL as,
SELECT City FROM T
WHERE Index IN (
SELECT Index FROM T
WHERE Year = ( SELECT MIN(Year) FROM T ) );
where Index denotes the attribute of record indices in table INLINEFORM0 . The query first computes the set of record indices containing the minimum value in column Year, which in our running example table is {0}. It then returns the values of column City in these records, which is Athens as it is the value of column City at record 0.
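The translation can be checked end to end with sqlite3 on a toy table; the rows below are illustrative stand-ins for the table of Figure FIGREF1, and the index column is renamed to Idx to avoid the SQL INDEX keyword.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Idx stands in for the record-index attribute (renamed to avoid the SQL INDEX keyword).
conn.execute("CREATE TABLE T (Idx INTEGER, Year INTEGER, City TEXT, Country TEXT)")
conn.executemany(
    "INSERT INTO T VALUES (?, ?, ?, ?)",
    [(0, 1896, "Athens", "Greece"),
     (1, 1900, "Paris", "France"),
     (2, 2004, "Athens", "Greece")],
)

# The SQL of Example 3.2: values of column City in the records with the minimum Year.
rows = conn.execute("""
    SELECT City FROM T
    WHERE Idx IN (
        SELECT Idx FROM T
        WHERE Year = (SELECT MIN(Year) FROM T) )
""").fetchall()
print(rows)   # [('Athens',)] -- the value of column City at record 0
```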
Provenance
The tracking and presentation of provenance data has been extensively studied in the context of relational queries BIBREF10 , BIBREF4 . In addition to explaining query results BIBREF4 , we can use provenance information for explaining the query execution on a given web table. We design a model for multilevel cell-based provenance over tables, with three levels of granularity. The model enables us to distinguish between different types of table cells involved in the execution process. This categorization of provenance cells serves as a form of query explanation that is later implemented in our provenance-based highlights (Section SECREF34 ).
Model Definitions
Given query INLINEFORM0 and table INLINEFORM1 , the execution result, denoted by INLINEFORM2 , is either a collection of table cells, or a numeric result of an aggregate or arithmetic operation.
We define INLINEFORM0 to be the infinite domain of possible queries over INLINEFORM1 , INLINEFORM2 to be the set of table records, INLINEFORM3 to be the set of table cells and denote by INLINEFORM4 the set of aggregate functions, {min, max, avg, count, sum}.
Our cell-based provenance takes as input a query and its corresponding table and returns the set of cells and aggregate functions involved in the query execution. The model distinguishes between three types of provenance cells. There are the cells returned as the query output INLINEFORM0 , cells that are examined during the execution, and also the cells in columns that are projected or aggregated on by the query. We formally define the following three cell-based provenance functions.
Definition 4.1 Let INLINEFORM0 be a formal query and INLINEFORM1 its corresponding table. We define three cell-based provenance functions, INLINEFORM2 . Given INLINEFORM3 the functions output a set of table cells and aggregate functions. INLINEFORM4
We use INLINEFORM0 to denote an aggregate function or arithmetic operation on table cells. Given the compositional nature of the lambda DCS query language, we define INLINEFORM1 as the set of all sub-queries composing INLINEFORM2 . We use INLINEFORM3 to denote the table columns that are either projected by the query, or that are aggregated on by it. DISPLAYFORM0 DISPLAYFORM1
Function INLINEFORM0 returns all cells output by INLINEFORM1 or, if INLINEFORM2 is the result of an arithmetic or aggregate operation, returns all table cells involved in that operation in addition to the aggregate function itself. INLINEFORM3 returns cells and aggregate functions used during the query execution. INLINEFORM4 returns all table cells in columns that are either projected or aggregated on by INLINEFORM5 . These cell-based provenance functions have a hierarchical relation, where the cells output by each function are a subset of those output by the following function. Therefore, the three provenance sets constitute an ordered chain, where INLINEFORM6 .
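As a minimal sketch of the three provenance sets, consider a query that projects one column over a selection on another (the shape of Example 4.3 below); the cell representation and function signature are assumptions made for illustration, not the paper's implementation.

```python
# Cells of a toy table as (record_index, column, value) triples.
TABLE = [
    (0, "Year", 1896), (0, "City", "Athens"),
    (1, "Year", 1900), (1, "City", "Paris"),
    (2, "Year", 2004), (2, "City", "Athens"),
]

def provenance(project_col, where_col, where_val):
    """Provenance sets of a query projecting `project_col` over rows where `where_col` = `where_val`."""
    matching = {r for (r, c, v) in TABLE if c == where_col and v == where_val}
    output   = {cell for cell in TABLE if cell[0] in matching and cell[1] == project_col}
    examined = output | {c for c in TABLE if c[1] == where_col and c[2] == where_val}
    columns  = {c for c in TABLE if c[1] in (project_col, where_col)}
    assert output <= examined <= columns      # the ordered chain of Definition 4.1
    return output, examined, columns          # the provenance chain of Definition 4.2

out, examined, columns = provenance("Year", "City", "Athens")
print(sorted(out))   # the Year cells of records 0 and 2, where City is Athens
```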
Having described our three levels of cell-based provenance, we combine them into a single multilevel cell-based model for querying tables.
Definition 4.2 Given formal query INLINEFORM0 and table INLINEFORM1 , the multilevel cell-based provenance of INLINEFORM2 executed on INLINEFORM3 is a function, INLINEFORM4
Returning the provenance chain, INLINEFORM0
Query Operators
Using our model, we describe the multilevel cell-based provenance of several lambda DCS operators in Table TABREF21 . Provenance descriptions of all lambda DCS operators are provided in Table TABREF69 (at the end of the paper). For simplicity, we omit the table parameter INLINEFORM0 from provenance expressions, writing INLINEFORM1 instead of INLINEFORM2 . We also denote both cells and aggregate functions as belonging to the same set.
We use INLINEFORM0 to denote a table cell with value INLINEFORM1 , while denoting specific cell values by INLINEFORM2 . Each cell INLINEFORM3 belongs to a table record, INLINEFORM4 with a unique index, INLINEFORM5 (Section SECREF8 ). We distinguish between two types of lambda DCS formulas: formulas returning values are denoted by INLINEFORM6 while those returning table records by INLINEFORM7 .
Example 4.3 We explain the provenance of the following lambda DCS query, INLINEFORM0
It returns the values of column Year in records where column City is Athens, thus INLINEFORM0 will return all cells containing these values. INLINEFORM1
The cells involved in the execution of INLINEFORM0 include the output cells INLINEFORM1 in addition to the provenance of the sub-formula City.Athens, defined as all cells of column City with value Athens. INLINEFORM2
Where, INLINEFORM0
The provenance of the columns of INLINEFORM0 is simply all cells appearing in columns Year and City. INLINEFORM1
The provenance rules used in the examples concern the lambda DCS operators of "column records" and of "column values". The definitions of the relevant provenance rules are given in the first two rows of Table TABREF69 .
Explaining Queries
To allow users to understand formal queries we must provide them with effective explanations. We describe the two methods of our system for explaining its generated queries to non-experts. Our first method translates formal queries into NL, deriving a detailed utterance representing the query. The second method implements the multilevel provenance model introduced in Section SECREF4 . For each provenance function ( INLINEFORM0 ) we uniquely highlight its cells, creating a visual explanation of the query execution.
Query to Utterance
Given a formal query in lambda DCS we provide a domain independent method for converting it into a detailed NL utterance. Drawing on the work in BIBREF7 we use a similar technique of deriving an NL utterance alongside the formal query. We introduce new NL templates describing complex lambda DCS operations for querying tables.
Example 5.1 The lambda DCS query, INLINEFORM0
is mapped to the utterance, "value in column Year where column Country is Greece". If we compose it with an aggregate function, INLINEFORM0
its respective utterance will be composed as well, being "maximum of values in column Year where column Country is Greece". The full derivation trees are presented in Figure FIGREF32 , where the original query parse tree is shown on the left, while our derived NL explanation is presented on the right.
We implement query to utterance as part of the semantic parser of our interface (Section SECREF42 ). The actual parsing of questions into formal queries is achieved using a context-free grammar (CFG). As shown in Figure FIGREF32 , formal queries are derived recursively by repeatedly applying the grammar deduction rules. Using the CYK BIBREF11 algorithm, the semantic parser returns derivation trees that maximize its objective (Section SECREF42 ). To generate an NL utterance for any formal query, we change the right-hand-side of each grammar rule to be a sequence of both non-terminals and NL phrases. For example, in the grammar rule ("maximum of" Values INLINEFORM0 Entity), Values and Entity are non-terminals while "maximum of" is an NL phrase. Table TABREF33 describes the rules of the CFG augmented with our NL utterances. At the end of the derivation, the full query utterance can be read as the yield of the parse tree.
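A small sketch of the template mechanism: each rule carries an NL template whose slots are filled with the utterances of its child derivations, so the full utterance is read off the parse tree bottom-up. The rule names and template strings are illustrative assumptions, not the interface's actual grammar.

```python
# A derivation-tree node is either a terminal string or a (rule_name, children) pair.
# Each rule maps to an NL template with slots for the utterances of its children.
TEMPLATES = {
    "ColumnValues": "values in column {0} where column {1} is {2}",
    "Maximum":      "maximum of {0}",
}

def utterance(node):
    """Read the NL utterance off a derivation tree by filling templates bottom-up."""
    if isinstance(node, str):            # terminal: a column name or an entity
        return node
    rule, children = node
    return TEMPLATES[rule].format(*(utterance(c) for c in children))

base = ("ColumnValues", ["Year", "Country", "Greece"])
print(utterance(base))
# -> values in column Year where column Country is Greece
print(utterance(("Maximum", [base])))
# -> maximum of values in column Year where column Country is Greece
```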
To utilize utterances as query explanations, we design them to be as clear and understandable as possible, albeit with a somewhat clumsy syntax. The references to table columns and rows as part of the NL utterance help to clarify the actual semantics of the query to non-expert users.
As the utterances are descriptions of formal queries, reading the utterance of each candidate query to determine its correctness might take some time. As user work-time is expensive, explanation methods that allow users to quickly target correct results are necessary. We enhance utterances with provenance-based explanations, used for quickly identifying correct queries.
Provenance to Highlights
The understanding of a table query can be achieved by examining the cells on which it is executed. We explain a query by highlighting its multilevel cell-based provenance (Section SECREF4 ).
Using our provenance model, we define a procedure that takes a query as input and returns all cells involved in its execution on the corresponding table. These cells are then highlighted in the table, illustrating the query execution. Given a query INLINEFORM0 and table INLINEFORM1 , the INLINEFORM2 procedure divides cells into four types, based on their multilevel provenance functions. To help illustrate the query, each type of provenance cells is highlighted differently:
Colored cells are equivalent to INLINEFORM3 and are the cells returned by INLINEFORM4 as output, or used to compute the final output.
Framed cells are equivalent to INLINEFORM5 and are the cells and aggregate functions used during query execution.
Lit cells are equivalent to INLINEFORM6 , and are the cells of columns projected by the query.
All other cells are unrelated to the query, hence no highlights are applied to them.
Example 5.2 Consider the lambda DCS query, INLINEFORM0
The utterance of this query is, "difference in column Total between rows where Nation is Fiji and Tonga". Figure FIGREF38 displays the highlights generated for this query, lighting all of the query's columns, framing its provenance cells and coloring the cells that comprise its output. In this example, all cells in columns Nation and Total are lit. The cells Fiji and Tonga are part of INLINEFORM0 and are therefore framed. The cells in INLINEFORM1 , containing 130 and 20, are colored as they contain the values used to compute the final result.
To highlight a query over the input table we call the procedure INLINEFORM0 with INLINEFORM1 . We describe our implementation in Algorithm SECREF34 . It is a recursive procedure which leverages the compositional nature of lambda DCS formulas. It decomposes the query INLINEFORM2 into its set of sub-formulas INLINEFORM3 , recursively computing the multilevel provenance. When reaching an atomic formula the algorithm will execute it and return its output. Cells returned by a sub-formula are both lit and framed, being part of INLINEFORM4 and INLINEFORM5 . Finally, all of the cells in INLINEFORM6 (Equation EQREF24 ) are colored.
Examples of provenance-based highlights are provided for several lambda DCS operators in Figures FIGREF38 - FIGREF38 . We display highlight examples for all lambda DCS operators in Figures TABREF70 - TABREF70 (at the end of the paper).
Algorithm 1 (Highlight): highlighting query cell-based provenance, a recursive procedure over the query's sub-formulas.
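The sketch below mirrors the recursive procedure described above for one simple query shape: it decomposes the query into sub-formulas, frames and lights the cells they touch, and colors the output cells. The query encoding, table rows and helper names are assumptions made for the example, not the paper's code.

```python
# Cells of a toy table as (record_index, column, value); the rows echo Example 5.2.
TABLE = [
    (0, "Nation", "Fiji"),  (0, "Total", 130),
    (1, "Nation", "Tonga"), (1, "Total", 20),
    (2, "Nation", "Samoa"), (2, "Total", 15),
]

def query_columns(query):
    """Columns projected or selected on by a (nested) query."""
    cols = {query[1]}
    if query[0] == "project":
        cols |= query_columns(query[2])
    return cols

def highlight(query, framed, lit):
    """Recursive sketch: returns the cells output by `query`, filling the framed/lit sets."""
    if query[0] == "column_values":                    # atomic: cells of a column with a value
        _, col, val = query
        cells = {c for c in TABLE if c[1] == col and c[2] == val}
    else:                                              # "project": recurse on the sub-formula
        _, col, sub = query
        rows = {c[0] for c in highlight(sub, framed, lit)}
        cells = {c for c in TABLE if c[0] in rows and c[1] == col}
    framed |= cells                                    # cells used during execution
    lit |= {c for c in TABLE if c[1] in query_columns(query)}
    return cells

def highlight_query(query):
    framed, lit = set(), set()
    colored = set(highlight(query, framed, lit))       # output cells are colored
    return colored, framed, lit

# "values of column Total in rows where Nation is Fiji"
colored, framed, lit = highlight_query(("project", "Total", ("column_values", "Nation", "Fiji")))
print(colored)   # {(0, 'Total', 130)}
```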
We note that different queries may possess identical provenance-based highlights. Consider Figure FIGREF38 and the following query utterances,
"values in column Games that are more than 4."
"values in column Games that are at least 5 and also less than 17."
The highlights displayed on Figure FIGREF38 will be the same for both of the above queries. In such cases the user should refer to the NL utterances of the queries in order to distinguish between them. Thus our query explanation methods are complementary, with the provenance-based highlights providing quick visual feedback while the NL utterances serve as detailed descriptions.
Scaling to Large Tables
We elaborate on how our query explanations can be easily extended to tables with numerous records. Given the nature of NL utterances, this form of explanation is independent of the table's size: the utterance provides an informed description of the query regardless of how many records or relations the table contains.
When employing our provenance-based highlights on large tables it might seem impractical to display them to the user. However, the highlights are meant to explain the candidate query itself, and not the final answer returned by it. Thus we can precisely convey the semantics of the query to the user by applying highlights to a subsample of the table.
An intuitive solution can be used to achieve a succinct sample. First we use Algorithm SECREF34 to compute the cell-based provenance sets INLINEFORM0 and to mark the aggregation operators on relevant table headers. We can then map each provenance cell to its relevant record (table row), enabling us to build corresponding record sets, INLINEFORM1 . To illustrate the query highlights we sample one record from each of the three sets: INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . In the special case of a query containing arithmetic difference (Figure FIGREF38 ), we select two records from INLINEFORM5 , one for each subtracted value. Sampled records are ordered according to their order in the original table. The example in Figure FIGREF40 contains three table rows selected from a large web table BIBREF12 .
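A minimal sketch of the sampling step, assuming the three provenance record sets have already been derived from the cell-based provenance by the highlighting procedure; the set names and selection rule below are written from the description above, not taken from the paper's code.

```python
def sample_rows(output_rows, examined_rows, column_rows, is_difference=False):
    """Pick a few record indices whose highlights suffice to illustrate the query semantics."""
    sample = set()
    ordered = sorted(output_rows)
    # One record from the output provenance; two when the query is an arithmetic difference.
    sample.update(ordered[:2] if is_difference else ordered[:1])
    # One record from each remaining provenance level, if not already covered.
    for rows in (examined_rows, column_rows):
        remaining = sorted(rows - sample)
        if remaining:
            sample.add(remaining[0])
    return sorted(sample)      # keep the records in their original table order

print(sample_rows(output_rows={4}, examined_rows={4, 9}, column_rows=set(range(50))))
# -> [0, 4, 9]: one sampled record per provenance level, ordered as in the table
```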
Concrete Applications
So far we have described our methods for query explanations (Sections SECREF30 , SECREF34 ) and we now harness these methods to enhance an existing NL interface for querying tables.
Implementation
We return to our system architecture from Figure FIGREF7 . Presented with an NL question and corresponding table, our interface parses the question into lambda DCS queries using the state-of-the-art parser in BIBREF5 . The parser is trained for the task of querying web tables using the WikiTableQuestions dataset BIBREF1 .
Following the mapping of a question to a set of candidate queries, our interface will generate relevant query explanations for each of the queries, displaying a detailed NL utterance and highlighting the provenance data. The explanations are presented to non-technical users to assist in selecting the correct formal query representing the question.
User feedback in the form of question-query pairs is also used offline in order to retrain the semantic parser.
We briefly describe the benchmark dataset used in our framework and its relation to the task of querying web tables.
WikiTableQuestions BIBREF1 is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was constructed by selecting data tables from Wikipedia that contained at least 8 rows and 5 columns. Amazon Mechanical Turk workers were then tasked with writing trivia questions about each table. In contrast to common NLIDB benchmarks BIBREF2 , BIBREF0 , BIBREF15 , WikiTableQuestions contains 22,033 questions and is an order of magnitude larger than previous state-of-the-art datasets. Its questions were not generated from predefined templates but were hand-crafted by users, demonstrating high linguistic variance. Compared to previous datasets on knowledge bases it covers nearly 4,000 unique column headers, containing far more relations than closed domain datasets BIBREF15 , BIBREF2 and datasets for querying knowledge bases BIBREF16 . Its questions cover a wide range of domains, requiring operations such as table lookup, aggregation, superlatives (argmax, argmin), arithmetic operations, joins and unions. The complexity of its questions is illustrated in Tables TABREF6 and TABREF66 .
The complete dataset contains 22,033 examples on 2,108 tables. As the test set, 20% of the tables and their associated questions were set aside, while the remaining tables and questions serve as the training set. The separation between tables in the training and test sets forces the question answering system to handle new tables with previously unseen relations and entities.
Training on Feedback
The goal of the semantic parser is to translate natural language questions into equivalent formal queries. Thus, in order to ideally train the parser, we should train it on questions annotated with their respective queries. However, annotating NL questions with formal queries is a costly operation, hence recent works have trained semantic parsers on examples labeled solely with their answer BIBREF17 , BIBREF18 , BIBREF0 , BIBREF1 . This weak supervision facilitates the training process at the cost of learning from incorrect queries. Figure FIGREF48 presents two candidate queries for the question "What was the last year the team was a part of the USL A-league?". Note that both queries output the correct answer to the question, which is 2004. However, the second query is clearly incorrect given its utterance is "minimum value in column Year in rows that have the highest value in column Open Cup".
The WikiTableQuestions dataset, on which the parser is trained, is comprised of question-answer pairs. Thus, by retraining the parser on question-query pairs that are provided as feedback, we can improve its overall correctness. We address this in our work by explaining queries to non-experts, enabling them to select the correct candidate query or mark None when all are incorrect.
These annotations are then used to retrain the semantic parser. Given a question, its annotations are the queries marked as correct by users. We note that a question may have more than one correct annotation.
Semantic Parsing is the task of mapping natural language questions to formal language queries (SQL, lambda DCS, etc.) that are executed against a target database. The semantic parser is a parameterized function, trained by updating its parameter vector such that questions from the training set are translated to formal queries yielding the correct answer.
We denote the table by INLINEFORM0 and the NL question by INLINEFORM1 . The semantic parser aims to generate a query INLINEFORM2 which executes to the correct answer of INLINEFORM3 on INLINEFORM4 , denoted by INLINEFORM5 . In our running example from Figure FIGREF1 , the parser tries to generate queries which execute to the value 2004. We define INLINEFORM6 as the set of candidate queries generated by parsing INLINEFORM7 . For each INLINEFORM8 we extract a feature vector INLINEFORM9 and define a log-linear distribution over candidates: DISPLAYFORM0
where INLINEFORM0 is the parameter vector. We formally define the parser distribution of yielding the correct answer, DISPLAYFORM0
where INLINEFORM0 is 1 when INLINEFORM1 and zero otherwise.
The parser is trained using examples INLINEFORM0 , optimizing the parameter vector INLINEFORM1 using AdaGrad BIBREF19 in order to maximize the following objective BIBREF1 , DISPLAYFORM0
where INLINEFORM0 is a hyperparameter vector obtained from cross-validation. To train a semantic parser that is unconstrained to any specific domain we deploy the parser in BIBREF5 , trained end-to-end on the WikiTableQuestions dataset BIBREF1 .
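For concreteness, a sketch of the log-linear model and weakly supervised objective described above, written in standard notation; this is our reconstruction, so the exact notation and regularization term in the original may differ.

```latex
% Log-linear distribution over the candidate queries Z_x generated for question x on table t:
\[
p_\theta(z \mid x, t) \;=\;
  \frac{\exp\{\theta^{\top}\phi(x, t, z)\}}
       {\sum_{z' \in Z_x} \exp\{\theta^{\top}\phi(x, t, z')\}}
\]
% Probability of yielding the correct answer y, marginalizing over candidate queries
% ([[z]]_t denotes the result of executing z on t):
\[
p_\theta(y \mid x, t) \;=\; \sum_{z \in Z_x} p_\theta(z \mid x, t)\,
  \mathbb{1}\bigl[\, [\![z]\!]_t = y \,\bigr]
\]
% Objective maximized with AdaGrad over training examples (x_i, t_i, y_i);
% R(theta) is a regularization term whose hyperparameters are set by cross-validation:
\[
\max_{\theta}\; \sum_{i} \log p_\theta(y_i \mid x_i, t_i) \;-\; R(\theta)
\]
```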
We modify the original parser so that annotated questions are trained using question-query pairs while all other questions are trained as before. The set of annotated examples is denoted by INLINEFORM0 . Given annotated example INLINEFORM1 , its set of valid queries is INLINEFORM2 . We define the distribution for an annotated example to yield the correct answer by, DISPLAYFORM0
where INLINEFORM0 is 1 when INLINEFORM1 and zero otherwise. Our new objective for retraining the semantic parser is, DISPLAYFORM0
where the first sum ranges over the annotated examples and the second over all other examples.
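In the same (reconstructed) notation, with A the set of annotated examples and Z^c_x the user-validated queries for question x, the retraining objective can be sketched as:

```latex
% An annotated example is counted as correct when the parser derives one of its validated queries:
\[
p_\theta(Z^{c}_{x} \mid x, t) \;=\; \sum_{z \in Z_x} p_\theta(z \mid x, t)\,
  \mathbb{1}\bigl[\, z \in Z^{c}_{x} \,\bigr]
\]
% Retraining objective: annotated examples are trained on question-query pairs,
% all remaining examples on question-answer pairs as before.
\[
\max_{\theta}\;
  \sum_{(x,\, t) \in A} \log p_\theta(Z^{c}_{x} \mid x, t)
  \;+\;
  \sum_{(x,\, t,\, y) \notin A} \log p_\theta(y \mid x, t)
\]
```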
This enables the parser to update its parameters so that questions are translated into correct queries, rather than merely into queries that yield the correct answer.
Deployment
At deployment, user interaction is used to ensure that the system returns formal-queries that are correct.
We have constructed a web interface allowing users to pose NL questions on tables and, by using our query explanations, to choose the correct query from the top-k generated candidates. Normally, a semantic parser receives an NL question as input and displays to the user only the result of its top ranked query. The user receives no explanation as to why she was returned this specific result or whether the parser managed to correctly parse her question into formal language. In contrast to the baseline parser, our system displays to users its top-k candidates, allowing them to modify the parser's top query.
Example 6.1 Figure FIGREF51 shows an example from the WikitableQuestions test set with the question "How many more ships were wrecked in lake Huron than in Erie". Note that the original table contains many more records than those displayed in the figure. Given the explanations of the parser's top candidates, our provenance-based highlights make it clear that the first query is correct as it compares the table occurrences of lakes Huron and Erie. The second result is incorrect, comparing lakes Huron and Superior, while the third query does not compare occurrences.
Experiments
Following the presentation of concrete applications for our methods we have designed an experimental study to measure the effect of our query explanation mechanism. We conducted experiments to evaluate both the quality of our explanations, as well as their contribution to the baseline parser. This section is comprised of two main parts: evaluating our explanations when used interactively at deployment time, and measuring the effect of retraining the parser on user feedback.
The experimental results show our query explanations to be effective, allowing non-experts to easily understand generated queries and to disqualify incorrect ones. Training on user feedback further improves the system correctness, allowing it to learn from user experience.
Evaluation Metrics
We begin by defining the system correctness, used as our main evaluation metric. Recall that the semantic parser is given an NL question INLINEFORM0 and table INLINEFORM1 and generates a set INLINEFORM2 of candidate queries. Each query INLINEFORM3 is then executed against the table, yielding result INLINEFORM4 . We define the parser correctness as the percentage of questions where the top-ranked query is a correct translation of INLINEFORM5 from NL to lambda DCS. In addition to correctness, we also measured the mean reciprocal rank (MRR), used for evaluating the average correctness of all candidate queries generated, rather than only that of the top-1.
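A small sketch of how the two metrics can be computed from per-question labels; the input format (each question's candidate queries in parser rank order, flagged as correct or not) is an assumption made for the example.

```python
def correctness(examples):
    """Fraction of questions whose top-ranked candidate query is a correct translation."""
    return sum(ex[0] for ex in examples) / len(examples)

def mean_reciprocal_rank(examples):
    """Average of 1/rank of the first correct candidate (0 when none is correct)."""
    total = 0.0
    for candidates in examples:
        rank = next((i + 1 for i, ok in enumerate(candidates) if ok), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(examples)

# Each example: candidate queries ordered by parser score, True marking a correct query.
examples = [
    [True,  False, False],   # correct query ranked first
    [False, True,  False],   # correct query ranked second
    [False, False, False],   # no correct query generated
]
print(correctness(examples))           # 0.333...
print(mean_reciprocal_rank(examples))  # (1 + 0.5 + 0) / 3 = 0.5
```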
Example 7.1 To illustrate the difference between correct answers and correct queries let us consider the example in Figure FIGREF48 . The parser generates the following candidate queries (we present only their utterances):
maximum value in column Year in rows where value of column League is USL A-League.
minimum value in column Year in rows that have the highest value in column Open Cup.
Both return the correct answer 2004, however only the first query conveys the correct translation of the NL question.
Interactive Parsing at Deployment
We use query explanations to improve the real-time performance of the semantic parser. Given any NL question on a (never before seen) table, the parser will generate a set of candidate queries. Using our explanations, the user will interactively select the correct query (when generated) from the parser's top-k results. We compare the correctness scores of our interactive method with that of the baseline parser.
Our user study was conducted using anonymous workers recruited through the Amazon Mechanical Turk (AMT) crowdsourcing platform. Focusing on non-experts, our only requirements were that participants be over 18 years old and reside in a native English speaking country. Our study included 35 distinct workers, a significant number of participants compared to previous works on NL interfaces BIBREF4 , BIBREF15 , BIBREF20 . Rather than relying on a small set of NL test questions BIBREF4 , BIBREF15 we presented each worker with 20 distinct questions that were randomly selected from the WikiTableQuestions benchmark dataset (Section SECREF41 ). A total of 405 distinct questions were presented (as described in Table TABREF59 ). For each question, workers were shown explanations (utterances, highlights) of the top-7 candidate queries generated. Candidates were randomly ordered, rather than ranked by the parser scores, so that users would not be biased towards the parser's top query. Given a question, participants were asked to mark the correct candidate query, or None if no correct query was generated.
Displaying the top-k results allowed workers to improve the baseline parser in cases where the correct query was generated, but not ranked at the top. After examining different values of INLINEFORM0 , we chose to display top-k queries with INLINEFORM1 . We made sure to validate that our choice of INLINEFORM2 was sufficiently large, so that it included the correct query (when generated). We randomly selected 100 examples where no correct query was generated in the top-7 and examined whether one was generated within the top-14 queries. Results showed that for INLINEFORM3 only 5% of the examples contained a correct query, a minor improvement at the cost of doubling user effort. Thus a choice of INLINEFORM4 appears to be reasonable.
To verify that our query explanations were understandable to non-experts we measured each worker's success. Results in Table TABREF59 show that in 78.4% of the cases, workers succeeded in identifying the correct query or identifying that no candidate query was correct. The average success rate across all 35 workers was 15.7/20 questions. When comparing our explanation approach (utterances + highlights) to a baseline of no explanations, non-expert users failed to identify correct queries when shown only lambda DCS queries. This demonstrates that utterances and provenance-based highlights serve as effective explanations of formal queries to the layperson. We now show that using them jointly is superior to using only utterances.
When introducing our two explanation methods, we noted their complementary nature. NL utterances serve as highly detailed phrases describing the query, while highlighting provenance cells allows users to quickly single out the correct queries. We put this claim to the test by measuring the impact our novel provenance-based highlights had on the average work-time of users. We measured the work-time of 20 distinct AMT workers, divided into two separate groups, each containing half of the participants. Workers from both groups were presented with 20 questions from WikiTableQuestions. The first group of workers was presented with both highlights and utterances as their query explanations, while the second group had to rely solely on NL utterances. Though both groups achieved identical correctness results, the group employing table highlights performed significantly faster. Results in Table TABREF60 show our provenance-based explanations cut the average and median work-time by 34% and 20% respectively. Since user work-time is valuable, the introduction of visual explanations such as table highlights may lead to significant savings in worker costs.
We examined the extent to which our query explanations can help users improve the correctness of a baseline NL interface. Our user study compares the correctness of three scenarios:
Parser correctness - our baseline is the percentage of examples where the top query returned by the semantic parser was correct.
User correctness - the percentage of examples where the user selected a correct query from the top-7 generated by the parser.
Hybrid correctness - correctness of queries returned by a combination of the previous two scenarios. The system returns the query marked by the user as correct; if the user marks all queries as incorrect it will return the parser's top candidate.
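The hybrid selection rule reduces to a few lines; the sketch below assumes the candidates are given in parser rank order together with the set of indices the user marked as correct.

```python
def hybrid_choice(ranked_candidates, user_marked):
    """Return the user-selected query if any; otherwise fall back to the parser's top query."""
    for i, candidate in enumerate(ranked_candidates):
        if i in user_marked:
            return candidate
    return ranked_candidates[0]

# User marks candidate 2 as correct -> it is returned instead of the parser's top query.
print(hybrid_choice(["q0", "q1", "q2"], user_marked={2}))    # 'q2'
# User marks every candidate as incorrect -> the parser's top query is returned.
print(hybrid_choice(["q0", "q1", "q2"], user_marked=set()))  # 'q0'
```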
Results in Table TABREF64 show the correctness rates of these scenarios. User correctness is superior to that of the baseline parser by 7.5% (from 37.1% to 44.6%), while the hybrid approach outscores both with a correctness of 48.7%, improving the baseline by 11.6%. For the user and hybrid correctness we used a INLINEFORM0 test to measure significance. Random queries and tables included in the experiment are presented in Table TABREF66 . We also include a comparison of the top ranked query of the baseline parser with that of the user.
We define the correctness bound as the percentage of examples where the top-k candidate queries actually contain a correct result. This bound serves as the optimal correctness score that workers can achieve. The 56% correctness bound of the baseline parser stems from the sheer complexity of the WikiTableQuestions benchmark. Given that the training and test tables are disjoint, the parser is tested on relations and entities unobserved during its training. This task of generalizing to unseen domains is an established challenge in semantic parsing BIBREF1 , BIBREF21 . Using the correctness bound as an upper bound on our results shows the hybrid approach achieves 87% of its full potential. Though there is some room for improvement, this seems reasonable given that our non-expert workers have no prior experience with their given task.
We describe the execution times for generating our query explanations in Table TABREF65 . We trained the semantic parser using the SEMPRE toolkit BIBREF0 on a machine with Xeon 2.20GHz CPU and 256GB RAM running Linux Ubuntu 14.04 LTS. We report the average generation times of candidate queries, utterances and highlights over the entire WikiTableQuestions test set, numbering 4,344 questions.
Training on User Feedback
We measure our system's ability to learn from user feedback in the form of question-query pairs. Given a question, the user is shown explanations of the parser's top-7 queries, using them to annotate the question, i.e., assign to it correct formal queries (e.g., the first query in Figure FIGREF48 ). Annotations were collected by presenting users with questions from the WikiTableQuestions training set along with query explanations of the parser results. To enhance the annotation quality, each question was presented to three distinct users, taking only the annotations marked by at least two of them as correct. Data collection was done using AMT and in total, 2,068 annotated questions were collected. Following a standard methodology, we split the annotated data into train and development sets. Out of our 2,068 annotated examples, 418 were selected as the development set, and 1,650 as the training set. The annotated development examples were used to evaluate the effect of our annotations on the parser correctness.
We experiment on two scenarios: (1) training the parser solely on 1,650 annotated examples; (2) integrating our training examples into the entire WikiTableQuestions training set of 11K examples. For each scenario we trained two parsers, one trained using annotations and the other without any use of annotations. To gain more robust results we ran our experiments on three different train/dev splits of our data, averaging the results. Table TABREF68 displays the results of our experiments. When training solely on the annotated examples, parser correctness on development examples increased by 8% (41.8% to 49.8%). The spike in correctness shows that feedback acquired using our explanations is high-quality input for the semantic parser, hence the parser achieves better correctness when trained on it compared to training on the original WikiTableQuestions benchmark.
When training on all 11K train examples together with our 1,650 annotations we also saw an increase (of 2.1%), albeit more modest given the small fraction of annotated examples. We witnessed an increase in both correctness and MRR (mean reciprocal rank) that grows with the number of annotated train examples. This further underscores the significance of annotated training data BIBREF22 , BIBREF14 and shows that our system can learn from quality feedback collected by non-experts.
Conclusion and Future Work
We have studied in this paper the problem of explaining complex NL queries to non-expert users. We introduced visual query explanations in the form of table highlights, based on a novel cell-based provenance model tested on web tables from hundreds of distinct domains. Table highlights provide immediate visual feedback for identifying correct candidate queries. We combine table highlights with utterance-based query explanations, significantly improving their effectiveness. Using our query explanations we enhanced an NL interface for querying tables by providing it with feedback at both deployment and training time. Feedback is procured through query explanations, allowing users with no technical background to query tables with confidence, while simultaneously providing feedback to enhance the interface itself. We implement a human-in-the-loop paradigm, where our users both exploit the underlying Machine Learning algorithm and provide it with further data to train on.
We have put our methods to the test, having conducted an extensive user study to determine the clarity of our explanations. Experimenting with explanations for hundreds of formal queries, users proved to be successful in interactively choosing correct queries, easily topping the baseline parser correctness. The addition of provenance-based highlights helps boost the efficacy of user feedback, cutting average work-time by a third compared to the utterances baseline.
e8647f9dc0986048694c34ab9ce763b3167c3deb | e8647f9dc0986048694c34ab9ce763b3167c3deb_0 | Q: Do they conduct a user study where they show an NL interface with and without their explanation?
Text: Introduction
Natural language interfaces have been gaining significant popularity, enabling ordinary users to write and execute complex queries. One of the prominent paradigms for developing NL interfaces is semantic parsing, which is the mapping of NL phrases into a formal language. As Machine Learning techniques are standardly used in semantic parsing, a training set of question-answer pairs is provided alongside a target database BIBREF0 , BIBREF1 , BIBREF2 . The parser is a parameterized function that is trained by updating its parameters such that questions from the training set are translated into queries that yield the correct answers.
A crucial challenge for using semantic parsers is their reliability. Flawless translation from NL to formal language is an open problem, and even state-of-the-art parsers are not always right. With no explanation of the executed query, users are left wondering if the result is actually correct. Consider the example in Figure FIGREF1 , displaying a table of Olympic games and the question "Greece held its last Olympics in what year?". A semantic parser parsing the question generates multiple candidate queries and returns the evaluation result of its top ranked query. The user is only presented with the evaluation result, 2004. Although the end result is correct, she has no clear indication whether the question was correctly parsed. In fact, the interface might have chosen any candidate query yielding 2004. Ensuring the system has executed a correct query (rather than simply returning a correct answer in a particular instance) is essential, as it enables reusing the query as the data evolves over time. For example, a user might wish for a query such as "The average price of the top 5 stocks on Wall Street" to be run on a daily basis. Only its correct translation into SQL will consistently return accurate results.
Our approach is to design provenance-based BIBREF3 , BIBREF4 query explanations that are extensible, domain-independent and immediately understandable by non-expert users. We devise a cell-based provenance model for explaining formal queries over web tables and implement it with our query explanations, (see Figure FIGREF1 ). We enhance an existing NL interface for querying tables BIBREF5 by introducing a novel component featuring our query explanations. Following the parsing of an input NL question, our component explains the candidate queries to users, allowing non-experts to choose the one that best fits their intention. The immediate application is to improve the quality of obtained queries at deployment time over simply choosing the parser's top query (without user feedback). Furthermore, we show how query explanations can be used to obtain user feedback which is used to retrain the Machine Learning system, thereby improving its performance.
System Overview
We review our system architecture from Figure FIGREF7 and describe its general workflow.
Preliminaries
We begin by formally defining our task of querying tables. Afterwards, we discuss the formal query language and show how lambda DCS queries can be translated directly into SQL.
Data Model
An NL interface for querying tables receives a question INLINEFORM0 on a table INLINEFORM1 and outputs a set of values INLINEFORM2 as the answer (where each value is either the content of a cell, or the result of an aggregate function on cells). As discussed in the introduction, we make the assumption that a query concerns a single table.
Following the model presented in BIBREF1 , all table records are ordered from top to bottom with each record possessing a unique INLINEFORM0 (0, 1, 2, ...). In addition, every record has a pointer INLINEFORM1 to the record above it. The values of table cells can be either strings, numbers or dates. While we view the table as a relation, it is common BIBREF1 , BIBREF5 to describe it as a knowledge base (KB) INLINEFORM2 where INLINEFORM3 is a set of entities and INLINEFORM4 a set of binary properties. The entity set, INLINEFORM5 is comprised of all table cells (e.g., INLINEFORM6 ) and all table records, while INLINEFORM7 contains all column headers, serving as binary relations from an entity to the table records it appears in. In the example of Figure FIGREF1 , column Country is a binary relation such that Country.Greece returns all table records where the value of column Country is Greece (see definition of composition operators below). If the table in Figure FIGREF1 has INLINEFORM8 records, the returned records indices will be INLINEFORM9 .
Query Language
Following the definition of our data model we introduce our formal query language, lambda dependency-based compositional semantics (lambda DCS) BIBREF6 , BIBREF0 , which is a language inspired by lambda calculus, that revolves around sets. Lambda DCS was originally designed for building an NL interface over Freebase BIBREF9 .
Lambda DCS is a highly expressive language, designed to represent complex NL questions involving sorting, aggregation intersection and more. It has been considered a standard language for performing semantic parsing over knowledge bases BIBREF6 , BIBREF0 , BIBREF1 , BIBREF5 . A lambda DCS formula is executed against a target table and returns either a set of values (string, number or date) or a set of table records. We describe here a simplified version of lambda DCS that will be sufficient for understanding the examples presented in this paper. For a full description of lambda DCS, the reader should refer to BIBREF6 . The basic constructs of lambda DCS are as follows:
Unary: a set of values. The simplest type of unary in a table is a table cell, e.g., Greece, which denotes the set of cells containing the entity 'Greece'.
Binary: A binary relation describes a relation between sets of objects. The simplest type of a binary relation is a table column INLINEFORM0 , mapping table entities to the records where they appear, e.g., Country.
Join: For a binary relation INLINEFORM0 and unary relation INLINEFORM1 , INLINEFORM2 operates as a selection and projection. INLINEFORM3 denotes all table records where the value of column Country is Greece.
Prev: Given records INLINEFORM0 the INLINEFORM1 operator will return the set of preceding table records, INLINEFORM2 .
Reverse: Given a binary relation INLINEFORM0 from INLINEFORM1 to INLINEFORM2 , there is a reversed binary relation R[ INLINEFORM3 ] from INLINEFORM4 to INLINEFORM5 . E.g., for a column binary relation INLINEFORM6 from table values to their records, R[ INLINEFORM7 ] is a relation from records to values. R[Year].Country.Greece takes all the record indices of Country.Greece and returns the values of column Year in these records. Similarly, R[Prev] denotes a relation from a set of records, to the set of following (reverse of previous) table records.
Intersection: Intersection of sets. E.g., the set of records where Country is Greece and also where Year is 2004, Country.Greece INLINEFORM0 Year.2004.
Union: Union of sets. E.g., records where the value of column Country is Greece or China, Country.Greece INLINEFORM0 Country.China.
Aggregation: Aggregate functions min, max, avg, sum, count that take a unary and return a unary with one number. E.g., INLINEFORM0 returns the number of records where the value of City is Athens.
Superlatives: argmax, argmin. For unary INLINEFORM0 and binary INLINEFORM1 , INLINEFORM2 is the set of all values INLINEFORM3 .
In this paper we use a group of predefined operators specifically designed for the task of querying tables BIBREF1 . The language operators are compositional in nature, allowing the semantic parser to compose several sub-formulas into a single formula representing complex query operations.
Example 3.1 Consider the following lambda DCS query on the table from Figure FIGREF1 , INLINEFORM0
it returns values of column City (binary) appearing in records (Record unary) that have the lowest value in column Year.
To position our work in the context of relational queries we show lambda DCS to be an expressive fragment of SQL. The translation into SQL proves useful when introducing our provenance model by aligning our model with previous work BIBREF10 , BIBREF4 . Table TABREF69 (presented at the end of the paper) describes all lambda DCS operators with their corresponding translation into SQL.
Example 3.2 Returning to the lambda DCS query from the previous example, it can be easily translated to SQL as,
SELECT City FROM T
WHERE Index IN (
SELECT Index FROM T
WHERE Year = ( SELECT MIN(Year) FROM T ) );
where Index denotes the attribute of record indices in table INLINEFORM0 . The query first computes the set of record indices containing the minimum value in column Year, which in our running example table is {0}. It then returns the values of column City in these records, which is Athens as it is the value of column City at record 0.
Provenance
The tracking and presentation of provenance data has been extensively studied in the context of relational queries BIBREF10 , BIBREF4 . In addition to explaining query results BIBREF4 , we can use provenance information for explaining the query execution on a given web table. We design a model for multilevel cell-based provenance over tables, with three levels of granularity. The model enables us to distinguish between different types of table cells involved in the execution process. This categorization of provenance cells serves as a form of query explanation that is later implemented in our provenance-based highlights (Section SECREF34 ).
Model Definitions
Given query INLINEFORM0 and table INLINEFORM1 , the execution result, denoted by INLINEFORM2 , is either a collection of table cells, or a numeric result of an aggregate or arithmetic operation.
We define INLINEFORM0 to be the infinite domain of possible queries over INLINEFORM1 , INLINEFORM2 to be the set of table records, INLINEFORM3 to be the set of table cells and denote by INLINEFORM4 the set of aggregate functions, {min, max, avg, count, sum}.
Our cell-based provenance takes as input a query and its corresponding table and returns the set of cells and aggregate functions involved in the query execution. The model distinguishes between three types of provenance cells. There are the cells returned as the query output INLINEFORM0 , cells that are examined during the execution, and also the cells in columns that are projected or aggregated on by the query. We formally define the following three cell-based provenance functions.
Definition 4.1 Let INLINEFORM0 be a formal query and INLINEFORM1 its corresponding table. We define three cell-based provenance functions, INLINEFORM2 . Given INLINEFORM3 the functions output a set of table cells and aggregate functions. INLINEFORM4
We use INLINEFORM0 to denote an aggregate function or arithmetic operation on tables cells. Given the compositional nature of the lambda DCS query language, we define INLINEFORM1 as the set of all sub-queries composing INLINEFORM2 . We have used INLINEFORM3 to denote the table columns that are either projected by the query, or that are aggregated on by it. DISPLAYFORM0 DISPLAYFORM1
Function INLINEFORM0 returns all cells output by INLINEFORM1 or, if INLINEFORM2 is the result of an arithmetic or aggregate operation, returns all table cells involved in that operation in addition to the aggregate function itself. INLINEFORM3 returns cells and aggregate functions used during the query execution. INLINEFORM4 returns all table cells in columns that are either projected or aggregated on by INLINEFORM5 . These cell-based provenance functions have a hierarchical relation, where the cells output by each function are a subset of those output by the following function. Therefore, the three provenance sets constitute an ordered chain, where INLINEFORM6 .
Having described our three levels of cell-based provenance, we combine them into a single multilevel cell-based model for querying tables.
Definition 4.2 Given formal query INLINEFORM0 and table INLINEFORM1 , the multilevel cell-based provenance of INLINEFORM2 executed on INLINEFORM3 is a function, INLINEFORM4
Returning the provenance chain, INLINEFORM0
Query Operators
Using our model, we describe the multilevel cell-based provenance of several lambda DCS operator in Table TABREF21 . Provenance descriptions of all lambda DCS operators are provided in Table TABREF69 (at the end of the paper). For simplicity, we omit the table parameter INLINEFORM0 from provenance expressions, writing INLINEFORM1 instead of INLINEFORM2 . We also denote both cells and aggregate functions as belonging to the same set.
We use INLINEFORM0 to denote a table cell with value INLINEFORM1 , while denoting specific cell values by INLINEFORM2 . Each cell INLINEFORM3 belongs to a table record, INLINEFORM4 with a unique index, INLINEFORM5 (Section SECREF8 ). We distinguish between two types of lambda DCS formulas: formulas returning values are denoted by INLINEFORM6 while those returning table records by INLINEFORM7 .
Example 4.3 We explain the provenance of the following lambda DCS query, INLINEFORM0
It returns the values of column Year in records where column City is Athens, thus INLINEFORM0 will return all cells containing these values. INLINEFORM1
The cells involved in the execution of INLINEFORM0 include the output cells INLINEFORM1 in addition to the provenance of the sub-formula City.Athens, defined as all cells of column City with value Athens. INLINEFORM2
Where, INLINEFORM0
The provenance of the columns of INLINEFORM0 is simply all cells appearing in columns Year and City. INLINEFORM1
The provenance rules used in the examples regard the lambda DCS operators of "column records" and of "column values". The definition of the relevant provenance rules are described in the first two rows of Table TABREF69 .
Explaining Queries
To allow users to understand formal queries we must provide them with effective explanations. We describe the two methods of our system for explaining its generated queries to non-experts. Our first method translates formal queries into NL, deriving a detailed utterance representing the query. The second method implements the multilevel provenance model introduced in Section SECREF4 . For each provenance function ( INLINEFORM0 ) we uniquely highlight its cells, creating a visual explanation of the query execution.
Query to Utterance
Given a formal query in lambda DCS we provide a domain-independent method for converting it into a detailed NL utterance. Drawing on the work in BIBREF7 , we use a similar technique of deriving an NL utterance alongside the formal query. We introduce new NL templates describing complex lambda DCS operations for querying tables.
Example 5.1 The lambda DCS query, INLINEFORM0
is mapped to the utterance, "value in column Year where column Country is Greece". If we compose it with an aggregate function, INLINEFORM0
its respective utterance will be composed as well, being "maximum of values in column Year where column Country is Greece". The full derivation trees are presented in Figure FIGREF32 , where the original query parse tree is shown on the left, while our derived NL explanation is presented on the right.
We implement query to utterance as part of the semantic parser of our interface (Section SECREF42 ). The actual parsing of questions into formal queries is achieved using a context-free grammar (CFG). As shown in Figure FIGREF32 , formal queries are derived recursively by repeatedly applying the grammar deduction rules. Using the CYK BIBREF11 algorithm, the semantic parser returns derivation trees that maximize its objective (Section SECREF42 ). To generate an NL utterance for any formal query, we change the right-hand side of each grammar rule to be a sequence of both non-terminals and NL phrases. For example, in the grammar rule ("maximum of" Values INLINEFORM0 Entity), Values and Entity are non-terminals and "maximum of" is an NL phrase. Table TABREF33 describes the rules of the CFG augmented with our NL utterances. At the end of the derivation, the full query utterance can be read as the yield of the parse tree.
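The following Python sketch illustrates how pairing each deduction rule with an NL template lets the utterance be composed bottom-up alongside the formal query; the rule names and templates below are simplified stand-ins for the grammar of Table TABREF33 rather than its actual implementation.

def where(column, value):
    # Rule pairing a "column records" formula with its NL template.
    return {"formula": f"{column}.{value}",
            "utterance": f"where column {column} is {value}"}

def values_in(column, rows):
    return {"formula": f"R[{column}].{rows['formula']}",
            "utterance": f"values in column {column} {rows['utterance']}"}

def maximum_of(values):
    return {"formula": f"max({values['formula']})",
            "utterance": f"maximum of {values['utterance']}"}

derivation = maximum_of(values_in("Year", where("Country", "Greece")))
print(derivation["formula"])    # max(R[Year].Country.Greece)
print(derivation["utterance"])  # maximum of values in column Year where column Country is Greece

Reading the utterance of the final derivation mirrors reading the yield of the parse tree in Figure FIGREF32 .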
To utilize utterances as query explanations, we design them to be as clear and understandable as possible, albeit with a somewhat clumsy syntax. The references to table columns and rows in the NL utterance help to clarify the actual semantics of the query to non-expert users.
As the utterances are descriptions of formal queries, reading the utterance of each candidate query to determine its correctness might take some time. As user work-time is expensive, explanation methods that allow users to quickly target correct results are necessary. We enhance utterances with provenance-based explanations, used to quickly identify correct queries.
Provenance to Highlights
The understanding of a table query can be achieved by examining the cells on which it is executed. We explain a query by highlighting its multilevel cell-based provenance (Section SECREF4 ).
Using our provenance model, we define a procedure that takes a query as input and returns all cells involved in its execution on the corresponding table. These cells are then highlighted in the table, illustrating the query execution. Given a query INLINEFORM0 and table INLINEFORM1 , the INLINEFORM2 procedure divides cells into four types, based on their multilevel provenance functions. To help illustrate the query, each type of its provenance cells is highlighted differently:
Colored cells are equivalent to INLINEFORM3 and are the cells returned by INLINEFORM4 as output, or used to compute the final output.
Framed cells are equivalent to INLINEFORM5 and are the cells and aggregate functions used during query execution.
Lit cells are equivalent to INLINEFORM6 , and are the cells of columns projected by the query.
All other cells are unrelated to the query, hence no highlights are applied to them.
Example 5.2 Consider the lambda DCS query, INLINEFORM0
The utterance of this query is, "difference in column Total between rows where Nation is Fiji and Tonga". Figure FIGREF38 displays the highlights generated for this query, lighting all of the query's columns, framing its provenance cells and coloring the cells that comprise its output. In this example, all cells in columns Nation and Total are lit. The cells Fiji and Tonga are part of INLINEFORM0 and are therefore framed. The cells in INLINEFORM1 , containing 130 and 20, are colored as they contain the values used to compute the final result.
To highlight a query over the input table we call the procedure INLINEFORM0 with INLINEFORM1 . We describe our implementation in Algorithm SECREF34 . It is a recursive procedure which leverages the compositional nature of lambda DCS formulas. It decomposes the query INLINEFORM2 into its set of sub-formulas INLINEFORM3 , recursively computing the multilevel provenance. When reaching an atomic formula the algorithm will execute it and return its output. Cells returned by a sub-formula are both lit and framed, being part of INLINEFORM4 and INLINEFORM5 . Finally, all of the cells in INLINEFORM6 (Equation EQREF24 ) are colored.
Examples of provenance-based highlights are provided for several lambda DCS operators in Figures FIGREF38 - FIGREF38 . We display highlight examples for all lambda DCS operators in Figures TABREF70 - TABREF70 (at the end of the paper).
Algorithm SECREF34 : Highlighting query cell-based provenance.
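As a complement to Algorithm SECREF34 , the following Python sketch captures the recursion for two representative operators, "column records" and "column values"; the tuple-based formula representation is an assumption made for illustration and does not mirror the system's lambda DCS data structures.

def highlight(formula, table):
    # Returns the three highlight classes as sets of (record index, column) pairs.
    lit, framed, colored = set(), set(), set()

    def provenance(f):
        op = f[0]
        if op == "where":                    # atomic "column records", e.g. ("where", "Nation", "Fiji")
            _, column, value = f
            cells = {(i, column) for i, rec in enumerate(table) if rec[column] == value}
        elif op == "values":                 # "column values" over the records of a sub-formula
            _, column, sub = f
            rows = {i for (i, _) in provenance(sub)}
            cells = {(i, column) for i in rows}
        else:
            raise ValueError(f"operator not covered by this sketch: {op}")
        lit.update((i, column) for i in range(len(table)))   # light the whole projected column
        framed.update(cells)                 # cells returned by a sub-formula are lit and framed
        return cells

    colored.update(provenance(formula))      # cells of the output set are colored
    return lit, framed, colored

On a table such as the one in the earlier provenance sketch, highlight(("values", "Year", ("where", "City", "Athens")), table) lights the Year and City columns, frames the Athens cell, and colors the matching Year cell.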
We note that different queries may possess identical provenance-based highlights. Consider Figure FIGREF38 and the following query utterances,
"values in column Games that are more than 4."
"values in column Games that are at least 5 and also less than 17."
The highlights displayed on Figure FIGREF38 will be the same for both of the above queries. In such cases the user should refer to the NL utterances of the queries in order to distinguish between them. Thus our query explanation methods are complementary, with the provenance-based highlights providing quick visual feedback while the NL utterances serve as detailed descriptions.
Scaling to Large Tables
We elaborate on how our query explanations can be easily extended to tables with numerous records. Given the nature of the NL utterances, this form of explanation is independent of the table's size. The utterance will still provide an informed explanation of the query regardless of the table size or the relations it contains.
When employing our provenance-based highlights on large tables it might seem intractable to display them to the user. However, the highlights are meant to explain the candidate query itself, and not the final answer returned by it. Thus we can precisely indicate to the user what the semantics of the query are by applying highlights to a subsample of the table.
An intuitive solution can be used to achieve a succinct sample. First we use Algorithm SECREF34 to compute the cell-based provenance sets INLINEFORM0 and to mark the aggregation operators on relevant table headers. We can then map each provenance cell to its relevant record (table row), enabling us to build corresponding record sets, INLINEFORM1 . To illustrate the query highlights we sample one record from each of the three sets: INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . In the special case of a query containing arithmetic difference (Figure FIGREF38 ), we select two records from INLINEFORM5 , one for each subtracted value. Sampled records are ordered according to their order in the original table. The example in Figure FIGREF40 contains three table rows selected from a large web table BIBREF12 .
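A minimal Python sketch of this sampling heuristic is given below; the function name and the exact tie-breaking are our own choices for illustration, and the real interface operates on the provenance sets produced by Algorithm SECREF34 .

def sample_highlight_records(output_rows, involved_rows, column_rows, is_difference=False):
    # Each argument is a set of record indices obtained by mapping provenance
    # cells to the table rows they belong to (output ⊆ involved ⊆ columns).
    sample = set(sorted(output_rows)[:2] if is_difference else sorted(output_rows)[:1])
    for rows in (involved_rows, column_rows):        # one representative per provenance level
        extra = sorted(rows - sample)
        if extra:
            sample.add(extra[0])
    return sorted(sample)                            # keep records in their original table order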
Concrete Applications
So far we have described our methods for query explanations (Sections SECREF30 , SECREF34 ) and we now harness these methods to enhance an existing NL interface for querying tables.
Implementation
We return to our system architecture from Figure FIGREF7 . Presented with an NL question and corresponding table, our interface parses the question into lambda DCS queries using the state-of-the-art parser in BIBREF5 . The parser is trained for the task of querying web tables using the WikiTableQuestions dataset BIBREF1 .
Following the mapping of a question to a set of candidate queries, our interface will generate relevant query explanations for each of the queries, displaying a detailed NL utterance and highlighting the provenance data. The explanations are presented to non-technical users to assist in selecting the correct formal query representing the question.
User feedback in the form of question-query pairs is also used offline in order to retrain the semantic parser.
We briefly describe the benchmark dataset used in our framework and its relation to the task of querying web tables.
WikiTableQuestions BIBREF1 is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was constructed by selecting data tables from Wikipedia that contained at least 8 rows and 5 columns. Amazon Mechanical Turk workers were then tasked with writing trivia questions about each table. In contrast to common NLIDB benchmarks BIBREF2 , BIBREF0 , BIBREF15 , WikiTableQuestions contains 22,033 questions and is an order of magnitude larger than previous state-of-the-art datasets. Its questions were not designed by predefined templates but were hand crafted by users, demonstrating high linguistic variance. Compared to previous datasets on knowledge bases it covers nearly 4,000 unique column headers, containing far more relations than closed domain datasets BIBREF15 , BIBREF2 and datasets for querying knowledge bases BIBREF16 . Its questions cover a wide range of domains, requiring operations such as table lookup, aggregation, superlatives (argmax, argmin), arithmetic operations, joins and unions. The complexity of its questions can be shown in Tables TABREF6 and TABREF66 .
The complete dataset contains 22,033 examples on 2,108 tables. As the test set, 20% of the tables and their associated questions were set aside, while the remaining tables and questions serve as the training set. The separation between tables in the training and test sets forces the question answering system to handle new tables with previously unseen relations and entities.
Training on Feedback
The goal of the semantic parser is to translate natural language questions into equivalent formal queries. Thus, in order to ideally train the parser, we should train it on questions annotated with their respective queries. However, annotating NL questions with formal queries is a costly operation, hence recent works have trained semantic parsers on examples labeled solely with their answer BIBREF17 , BIBREF18 , BIBREF0 , BIBREF1 . This weak supervision facilitates the training process at the cost of learning from incorrect queries. Figure FIGREF48 presents two candidate queries for the question "What was the last year the team was a part of the USL A-league?". Note that both queries output the correct answer to the question, which is 2004. However, the second query is clearly incorrect given its utterance is "minimum value in column Year in rows that have the highest value in column Open Cup".
The WikiTableQuestions dataset, on which the parser is trained, is comprised of question-answer pairs. Thus, by retraining the parser on question-query pairs provided as feedback, we can improve its overall correctness. We address this in our work by explaining queries to non-experts, enabling them to select the correct candidate query or mark None when all are incorrect.
These annotations are then used to retrain the semantic parser. Given a question, its annotations are the queries marked as correct by users. We note that a question may have more than one correct annotation.
Semantic Parsing is the task of mapping natural language questions to formal language queries (SQL, lambda DCS, etc.) that are executed against a target database. The semantic parser is a parameterized function, trained by updating its parameter vector such that questions from the training set are translated to formal queries yielding the correct answer.
We denote the table by INLINEFORM0 and the NL question by INLINEFORM1 . The semantic parser aims to generate a query INLINEFORM2 which executes to the correct answer of INLINEFORM3 on INLINEFORM4 , denoted by INLINEFORM5 . In our running example from Figure FIGREF1 , the parser tries to generate queries which execute to the value 2004. We define INLINEFORM6 as the set of candidate queries generated by parsing INLINEFORM7 . For each INLINEFORM8 we extract a feature vector INLINEFORM9 and define a log-linear distribution over candidates: DISPLAYFORM0
where INLINEFORM0 is the parameter vector. We formally define the parser distribution of yielding the correct answer, DISPLAYFORM0
where INLINEFORM0 is 1 when INLINEFORM1 and zero otherwise.
The parser is trained using examples INLINEFORM0 , optimizing the parameter vector INLINEFORM1 using AdaGrad BIBREF19 in order to maximize the following objective BIBREF1 , DISPLAYFORM0
where INLINEFORM0 is a hyperparameter vector obtained from cross-validation. To train a semantic parser that is unconstrained to any specific domain we deploy the parser in BIBREF5 , trained end-to-end on the WikiTableQuestions dataset BIBREF1 .
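As a self-contained illustration of this training signal (omitting candidate generation, feature extraction and the regularization term), consider the following Python sketch; the function names are ours, the feature matrix stands in for the extracted feature vectors, and the indicator array marks candidates executing to the correct answer.

import numpy as np

def candidate_distribution(features, theta):
    # Log-linear distribution over candidate queries: features has shape
    # (num_candidates, num_features) and theta is the parameter vector.
    scores = features @ theta
    scores -= scores.max()                   # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def log_prob_correct_answer(features, executes_to_answer, theta):
    # executes_to_answer[i] is 1 if candidate i yields the correct answer, else 0;
    # weak supervision credits the total mass placed on such candidates.
    probs = candidate_distribution(features, theta)
    return float(np.log((probs * executes_to_answer).sum() + 1e-12))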
We modify the original parser so that annotated questions are trained using question-query pairs while all other questions are trained as before. The set of annotated examples is denoted by INLINEFORM0 . Given annotated example INLINEFORM1 , its set of valid queries is INLINEFORM2 . We define the distribution for an annotated example to yield the correct answer by, DISPLAYFORM0
where INLINEFORM0 is 1 when INLINEFORM1 and zero otherwise. Our new objective for retraining the semantic parser, DISPLAYFORM0
where the first sum ranges over the set of annotated examples and the second sum over all other examples.
This enables the parser to update its parameters so that questions are translated into correct queries, rather than merely into queries that yield the correct answer.
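Continuing the previous sketch, the only change for an annotated example is the indicator used inside the logarithm: it now marks the queries annotated as correct by users, rather than the queries that merely return the correct answer (the names below are again illustrative).

import numpy as np

def log_prob_example(probs, candidate_results, answer, annotated_mask=None):
    # probs: log-linear distribution over the candidate queries of one example.
    # candidate_results[i]: execution result of candidate i.
    # annotated_mask[i]: 1 if users marked candidate i as a correct query, else 0;
    # None for examples that only carry question-answer supervision.
    if annotated_mask is not None:           # question-query supervision
        mask = np.asarray(annotated_mask, dtype=float)
    else:                                    # question-answer (weak) supervision
        mask = np.array([r == answer for r in candidate_results], dtype=float)
    return float(np.log((probs * mask).sum() + 1e-12))

# The retraining objective sums this quantity over annotated and unannotated examples alike.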
Deployment
At deployment, user interaction is used to ensure that the system returns formal-queries that are correct.
We have constructed a web interface allowing users to pose NL questions on tables and, by using our query explanations, to choose the correct query from the top-k generated candidates. Normally, a semantic parser receives an NL question as input and displays to the user only the result of its top ranked query. The user receives no explanation as to why she was returned this specific result or whether the parser managed to correctly parse her question into formal language. In contrast to the baseline parser, our system displays to users its top-k candidates, allowing them to modify the parser's top query.
Example 6.1 Figure FIGREF51 shows an example from the WikitableQuestions test set with the question "How many more ships were wrecked in lake Huron than in Erie". Note that the original table contains many more records than those displayed in the figure. Given the explanations of the parser's top candidates, our provenance-based highlights make it clear that the first query is correct as it compares the table occurrences of lakes Huron and Erie. The second result is incorrect, comparing lakes Huron and Superior, while the third query does not compare occurrences.
Experiments
Following the presentation of concrete applications for our methods, we designed an experimental study to measure the effect of our query explanation mechanism. We conducted experiments to evaluate both the quality of our explanations and their contribution to the baseline parser. This section is comprised of two main parts: evaluating interactive parsing at deployment, and measuring the effect of training on user feedback.
The experimental results show our query explanations to be effective, allowing non-experts to easily understand generated queries and to disqualify incorrect ones. Training on user feedback further improves the system correctness, allowing it to learn from user experience.
Evaluation Metrics
We begin by defining the system correctness, used as our main evaluation metric. Recall that the semantic parser is given an NL question INLINEFORM0 and table INLINEFORM1 and generates a set INLINEFORM2 of candidate queries. Each query INLINEFORM3 is then executed against the table, yielding result INLINEFORM4 . We define the parser correctness as the percentage of questions where the top-ranked query is a correct translation of INLINEFORM5 from NL to lambda DCS. In addition to correctness, we also measured the mean reciprocal rank (MRR), used for evaluating the average correctness of all candidate queries generated, rather than only that of the top-1.
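For concreteness, both metrics can be computed as in the following Python sketch, where each question is represented by its ranked candidate list and a boolean per candidate marking whether that candidate is a correct translation (not merely one that returns the correct answer).

def correctness(per_question_labels):
    # per_question_labels: list of lists of booleans, ordered by parser rank;
    # entry i is True iff the i-th ranked candidate query is a correct translation.
    return sum(labels[0] if labels else False for labels in per_question_labels) / len(per_question_labels)

def mean_reciprocal_rank(per_question_labels):
    total = 0.0
    for labels in per_question_labels:
        rank = next((i + 1 for i, ok in enumerate(labels) if ok), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(per_question_labels)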
Example 7.1 To illustrate the difference between correct answers and correct queries let us consider the example in Figure FIGREF48 . The parser generates the following candidate queries (we present only their utterances):
maximum value in column Year in rows where value of column League is USL A-League.
minimum value in column Year in rows that have the highest value in column Open Cup.
Both return the correct answer 2004, however only the first query conveys the correct translation of the NL question.
Interactive Parsing at Deployment
We use query explanations to improve the real-time performance of the semantic parser. Given any NL question on a (never before seen) table, the parser will generate a set of candidate queries. Using our explanations, the user will interactively select the correct query (when generated) from the parser's top-k results. We compare the correctness scores of our interactive method with that of the baseline parser.
Our user study was conducted using anonymous workers recruited through the Amazon Mechanical Turk (AMT) crowdsourcing platform. Focusing on non-experts, our only requirements were that participants be over 18 years old and reside in a native English-speaking country. Our study included 35 distinct workers, a significant number of participants compared to previous works on NL interfaces BIBREF4 , BIBREF15 , BIBREF20 . Rather than relying on a small set of NL test questions BIBREF4 , BIBREF15 , we presented each worker with 20 distinct questions randomly selected from the WikiTableQuestions benchmark dataset (Section SECREF41 ). A total of 405 distinct questions were presented (as described in Table TABREF59 ). For each question, workers were shown explanations (utterances, highlights) of the top-7 candidate queries generated. Candidates were randomly ordered, rather than ranked by the parser scores, so that users would not be biased towards the parser's top query. Given a question, participants were asked to mark the correct candidate query, or None if no correct query was generated.
Displaying the top-k results allowed workers to improve the baseline parser in cases where the correct query was generated, but not ranked at the top. After examining different values of INLINEFORM0 , we chose to display top-k queries with INLINEFORM1 . We made sure to validate that our choice of INLINEFORM2 was sufficiently large, so that it included the correct query (when generated). We randomly selected 100 examples where no correct query was generated in the top-7 and examined whether one was generated within the top-14 queries. Results showed that for INLINEFORM3 only 5% of the examples contained a correct query, a minor improvement at the cost of doubling user effort. Thus a choice of INLINEFORM4 appears to be reasonable.
To verify that our query explanations were understandable to non-experts we measured each worker's success. Results in Table TABREF59 show that in 78.4% of the cases, workers succeeded in identifying the correct query or in determining that no candidate query was correct. The average success rate for all 35 workers was 15.7/20 questions. When comparing our explanation approach (utterances + highlights) to a baseline of no explanations, non-expert users failed to identify correct queries when shown only lambda DCS queries. This demonstrates that utterances and provenance-based highlights serve as effective explanations of formal queries to the layperson. We now show that using them jointly is superior to using only utterances.
When introducing our two explanation methods, we noted their complementary nature. NL utterances serve as highly detailed phrases describing the query, while highlighting provenance cells allows to quickly single out the correct queries. We put this claim to the test by measuring the impact our novel provenance-based highlights had on the average work-time of users. We measured the work-time of 20 distinct AMT workers, divided into two separate groups, each containing half of the participants. Workers from both groups were presented with 20 questions from WikiTableQuestions. The first group of workers were presented both with highlights and utterances as their query explanations, while the second group had to rely solely on NL utterances. Though both groups achieved identical correctness results, the group employing table highlights performed significantly faster. Results in Table TABREF60 show our provenance-based explanations cut the average and median work-time by 34% and 20% respectively. Since user work-time is valuable, the introduction of visual explanations such as table highlights may lead to significant savings in worker costs.
We have examined the extent to which our query explanations can help users improve the correctness of a baseline NL interface. Our user study compares the correctness of three scenarios:
Parser correctness - our baseline is the percentage of examples where the top query returned by the semantic parser was correct.
User correctness - the percentage of examples where the user selected a correct query from the top-7 generated by the parser.
Hybrid correctness - correctness of queries returned by a combination of the previous two scenarios. The system returns the query marked by the user as correct; if the user marks all queries as incorrect it will return the parser's top candidate.
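The hybrid scenario amounts to the simple fallback rule sketched below (illustrative code, not the deployed interface).

def hybrid_choice(parser_ranked_queries, user_selected_query=None):
    # parser_ranked_queries: candidate queries ordered by parser score (best first).
    # user_selected_query: the candidate marked correct by the user, or None when
    # the user marked every displayed candidate as incorrect.
    return user_selected_query if user_selected_query is not None else parser_ranked_queries[0]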
Results in Table TABREF64 show the correctness rates of these scenarios. The user correctness score is superior to that of the baseline parser by 7.5% (from 37.1% to 44.6%), while the hybrid approach outscores both with a correctness of 48.7%, improving the baseline by 11.6%. For the user and hybrid correctness we used a INLINEFORM0 test to measure significance. Random queries and tables included in the experiment are presented in Table TABREF66 . We also include a comparison of the baseline parser's top ranked query with the query selected by the user.
We define the correctness bound as the percentage of examples where the top-k candidate queries actually contain a correct result. This bound serves as the optimal correctness score that workers can achieve. The 56% correctness-bound of the baseline parser stems from the sheer complexity of the WikiTableQuestions benchmark. Given the training and test tables are disjoint, the parser is tested on relations and entities unobserved during its training. This task of generalizing to unseen domains is an established challenge in semantic parsing BIBREF1 , BIBREF21 . Using the correctness-bound as an upper bound on our results shows the hybrid approach achieves 87% of its full potential. Though there is some room for improvement, it seems reasonable given that our non-expert workers possess no prior experience of their given task.
We describe the execution times for generating our query explanations in Table TABREF65 . We trained the semantic parser using the SEMPRE toolkit BIBREF0 on a machine with a Xeon 2.20GHz CPU and 256GB RAM running Linux Ubuntu 14.04 LTS. We report the average generation times of candidate queries, utterances and highlights over the entire WikiTableQuestions test set, numbering 4,344 questions.
Training on User Feedback
We measure our system's ability to learn from user feedback in the form of question-query pairs. Given a question, the user is shown explanations of the parser's top-7 queries, using them to annotate the question, i.e., to assign to it correct formal queries (e.g., the first query in Figure FIGREF48 ). Annotations were collected by presenting users with questions from the WikiTableQuestions training set along with query explanations of the parser results. To enhance the annotation quality, each question was presented to three distinct users, and only the annotations marked by at least two of them were taken as correct. Data collection was done using AMT and in total, 2,068 annotated questions were collected. Following a standard methodology, we split the annotated data into train and development sets. Out of our 2,068 annotated examples, 418 were selected as the development set, and 1,650 as the training set. The annotated development examples were used to evaluate the effect of our annotations on the parser correctness.
We experiment on two scenarios: (1) training the parser solely on 1,650 annotated examples; (2) integrating our training examples into the entire WikiTableQuestions training set of 11K examples. For each scenario we trained two parsers, one trained using annotations and the other without any use of annotations. To gain more robust results we ran our experiments on three different train/dev splits of our data, averaging the results. Table TABREF68 displays the results of our experiments. When training solely on the annotated examples, parser correctness on development examples increased by 8% (41.8% to 49.8%). The spike in correctness shows that feedback acquired using our explanations is high-quality input for the semantic parser, hence the parser achieves better correctness when trained on it compared to training on the original WikiTableQuestions benchmark.
When training on all 11K training examples together with our 1,650 annotations we also saw an increase (of 2.1%), albeit a more modest one given the small share of annotated examples. We witnessed an increase in both correctness and MRR (mean reciprocal rank) that grows with the number of annotated training examples. This further underscores the significance of annotated training data BIBREF22 , BIBREF14 and shows that our system can learn from quality feedback collected by non-experts.
Conclusion and Future Work
We have studied in this paper the problem of explaining complex NL queries to non-expert users. We introduced visual query explanations in the form of table highlights, based on a novel cell-based provenance model tested on web tables from hundreds of distinct domains. Table highlights provide immediate visual feedback for identifying correct candidate queries. We combine table highlights with utterance-based query explanations, significantly improving their effectiveness. Using our query explanations we enhanced an NL interface for querying tables by providing it with feedback at both deployment and training time. Feedback is procured through query explanations, allowing users with no technical background to query tables with confidence, while simultaneously providing feedback to enhance the interface itself. We implement a human-in-the-loop paradigm, where our users both exploit the underlying Machine Learning algorithm and provide it with further data to train on.
We have put our methods to the test, having conducted an extensive user study to determine the clarity of our explanations. Experimenting with explanations for hundreds of formal queries, users proved to be successful in interactively choosing correct queries, easily topping the baseline parser correctness. The addition of provenance-based highlights helps boost the efficacy of user feedback, cutting average work-time by a third compared to the utterances baseline. | No |
Text: Introduction
Natural language interfaces have been gaining significant popularity, enabling ordinary users to write and execute complex queries. One of the prominent paradigms for developing NL interfaces is semantic parsing, which is the mapping of NL phrases into a formal language. As Machine Learning techniques are standardly used in semantic parsing, a training set of question-answer pairs is provided alongside a target database BIBREF0 , BIBREF1 , BIBREF2 . The parser is a parameterized function that is trained by updating its parameters such that questions from the training set are translated into queries that yield the correct answers.
A crucial challenge for using semantic parsers is their reliability. Flawless translation from NL to formal language is an open problem, and even state-of-the-art parsers are not always right. With no explanation of the executed query, users are left wondering if the result is actually correct. Consider the example in Figure FIGREF1 , displaying a table of Olympic games and the question "Greece held its last Olympics in what year?". A semantic parser parsing the question generates multiple candidate queries and returns the evaluation result of its top ranked query. The user is only presented with the evaluation result, 2004. Although the end result is correct, she has no clear indication whether the question was correctly parsed. In fact, the interface might have chosen any candidate query yielding 2004. Ensuring the system has executed a correct query (rather than simply returning a correct answer in a particular instance) is essential, as it enables reusing the query as the data evolves over time. For example, a user might wish for a query such as "The average price of the top 5 stocks on Wall Street" to be run on a daily basis. Only its correct translation into SQL will consistently return accurate results.
Our approach is to design provenance-based BIBREF3 , BIBREF4 query explanations that are extensible, domain-independent and immediately understandable by non-expert users. We devise a cell-based provenance model for explaining formal queries over web tables and implement it with our query explanations, (see Figure FIGREF1 ). We enhance an existing NL interface for querying tables BIBREF5 by introducing a novel component featuring our query explanations. Following the parsing of an input NL question, our component explains the candidate queries to users, allowing non-experts to choose the one that best fits their intention. The immediate application is to improve the quality of obtained queries at deployment time over simply choosing the parser's top query (without user feedback). Furthermore, we show how query explanations can be used to obtain user feedback which is used to retrain the Machine Learning system, thereby improving its performance.
System Overview
We review our system architecture from Figure FIGREF7 and describe its general workflow.
Preliminaries
We begin by formally defining our task of querying tables. Afterwards, we discuss the formal query language and show how lambda DCS queries can be translated directly into SQL.
Data Model
An NL interface for querying tables receives a question INLINEFORM0 on a table INLINEFORM1 and outputs a set of values INLINEFORM2 as the answer (where each value is either the content of a cell, or the result of an aggregate function on cells). As discussed in the introduction, we make the assumption that a query concerns a single table.
Following the model presented in BIBREF1 , all table records are ordered from top to bottom with each record possessing a unique INLINEFORM0 (0, 1, 2, ...). In addition, every record has a pointer INLINEFORM1 to the record above it. The values of table cells can be either strings, numbers or dates. While we view the table as a relation, it is common BIBREF1 , BIBREF5 to describe it as a knowledge base (KB) INLINEFORM2 where INLINEFORM3 is a set of entities and INLINEFORM4 a set of binary properties. The entity set, INLINEFORM5 is comprised of all table cells (e.g., INLINEFORM6 ) and all table records, while INLINEFORM7 contains all column headers, serving as binary relations from an entity to the table records it appears in. In the example of Figure FIGREF1 , column Country is a binary relation such that Country.Greece returns all table records where the value of column Country is Greece (see definition of composition operators below). If the table in Figure FIGREF1 has INLINEFORM8 records, the returned records indices will be INLINEFORM9 .
Query Language
Following the definition of our data model we introduce our formal query language, lambda dependency-based compositional semantics (lambda DCS) BIBREF6 , BIBREF0 , which is a language inspired by lambda calculus, that revolves around sets. Lambda DCS was originally designed for building an NL interface over Freebase BIBREF9 .
Lambda DCS is a highly expressive language, designed to represent complex NL questions involving sorting, aggregation intersection and more. It has been considered a standard language for performing semantic parsing over knowledge bases BIBREF6 , BIBREF0 , BIBREF1 , BIBREF5 . A lambda DCS formula is executed against a target table and returns either a set of values (string, number or date) or a set of table records. We describe here a simplified version of lambda DCS that will be sufficient for understanding the examples presented in this paper. For a full description of lambda DCS, the reader should refer to BIBREF6 . The basic constructs of lambda DCS are as follows:
Unary: a set of values. The simplest type of unary in a table is a table cell, e.g., Greece, which denotes the set of cells containing the entity 'Greece'.
Binary: A binary relation describes a relation between sets of objects. The simplest type of a binary relation is a table column INLINEFORM0 , mapping table entities to the records where they appear, e.g., Country.
Join: For a binary relation INLINEFORM0 and unary relation INLINEFORM1 , INLINEFORM2 operates as a selection and projection. INLINEFORM3 denotes all table records where the value of column Country is Greece.
Prev: Given records INLINEFORM0 the INLINEFORM1 operator will return the set of preceding table records, INLINEFORM2 .
Reverse: Given a binary relation INLINEFORM0 from INLINEFORM1 to INLINEFORM2 , there is a reversed binary relation R[ INLINEFORM3 ] from INLINEFORM4 to INLINEFORM5 . E.g., for a column binary relation INLINEFORM6 from table values to their records, R[ INLINEFORM7 ] is a relation from records to values. R[Year].Country.Greece takes all the record indices of Country.Greece and returns the values of column Year in these records. Similarly, R[Prev] denotes a relation from a set of records, to the set of following (reverse of previous) table records.
Intersection: Intersection of sets. E.g., the set of records where Country is Greece and also where Year is 2004, Country.Greece INLINEFORM0 Year.2004.
Union: Union of sets. E.g., records where the value of column Country is Greece or China, Country.Greece INLINEFORM0 Country.China.
Aggregation: Aggregate functions min, max, avg, sum, count that take a unary and return a unary with one number. E.g., INLINEFORM0 returns the number of records where the value of City is Athens.
Superlatives: argmax, argmin. For unary INLINEFORM0 and binary INLINEFORM1 , INLINEFORM2 is the set of all values INLINEFORM3 .
In this paper we use a group of predefined operators specifically designed for the task of querying tables BIBREF1 . The language operators are compositional in nature, allowing the semantic parser to compose several sub-formulas into a single formula representing complex query operations.
Example 3.1 Consider the following lambda DCS query on the table from Figure FIGREF1 , INLINEFORM0
it returns values of column City (binary) appearing in records (Record unary) that have the lowest value in column Year.
To position our work in the context of relational queries we show lambda DCS to be an expressive fragment of SQL. The translation into SQL proves useful when introducing our provenance model by aligning our model with previous work BIBREF10 , BIBREF4 . Table TABREF69 (presented at the end of the paper) describes all lambda DCS operators with their corresponding translation into SQL.
Example 3.2 Returning to the lambda DCS query from the previous example, it can be easily translated to SQL as,
SELECT City FROM T
WHERE Index IN (
SELECT Index FROM T
WHERE Year = ( SELECT MIN(Year) FROM T ) );
where Index denotes the attribute of record indices in table INLINEFORM0 . The query first computes the set of record indices containing the minimum value in column Year, which in our running example table is {0}. It then returns the values of column City in these records, which is Athens as it is the value of column City at record 0.
Provenance
The tracking and presentation of provenance data has been extensively studied in the context of relational queries BIBREF10 , BIBREF4 . In addition to explaining query results BIBREF4 , we can use provenance information for explaining the query execution on a given web table. We design a model for multilevel cell-based provenance over tables, with three levels of granularity. The model enables us to distinguish between different types of table cells involved in the execution process. This categorization of provenance cells serves as a form of query explanation that is later implemented in our provenance-based highlights (Section SECREF34 ).
Model Definitions
Given query INLINEFORM0 and table INLINEFORM1 , the execution result, denoted by INLINEFORM2 , is either a collection of table cells, or a numeric result of an aggregate or arithmetic operation.
We define INLINEFORM0 to be the infinite domain of possible queries over INLINEFORM1 , INLINEFORM2 to be the set of table records, INLINEFORM3 to be the set of table cells and denote by INLINEFORM4 the set of aggregate functions, {min, max, avg, count, sum}.
Our cell-based provenance takes as input a query and its corresponding table and returns the set of cells and aggregate functions involved in the query execution. The model distinguishes between three types of provenance cells. There are the cells returned as the query output INLINEFORM0 , cells that are examined during the execution, and also the cells in columns that are projected or aggregated on by the query. We formally define the following three cell-based provenance functions.
Definition 4.1 Let INLINEFORM0 be a formal query and INLINEFORM1 its corresponding table. We define three cell-based provenance functions, INLINEFORM2 . Given INLINEFORM3 the functions output a set of table cells and aggregate functions. INLINEFORM4
We use INLINEFORM0 to denote an aggregate function or arithmetic operation on tables cells. Given the compositional nature of the lambda DCS query language, we define INLINEFORM1 as the set of all sub-queries composing INLINEFORM2 . We have used INLINEFORM3 to denote the table columns that are either projected by the query, or that are aggregated on by it. DISPLAYFORM0 DISPLAYFORM1
Function INLINEFORM0 returns all cells output by INLINEFORM1 or, if INLINEFORM2 is the result of an arithmetic or aggregate operation, returns all table cells involved in that operation in addition to the aggregate function itself. INLINEFORM3 returns cells and aggregate functions used during the query execution. INLINEFORM4 returns all table cells in columns that are either projected or aggregated on by INLINEFORM5 . These cell-based provenance functions have a hierarchical relation, where the cells output by each function are a subset of those output by the following function. Therefore, the three provenance sets constitute an ordered chain, where INLINEFORM6 .
Having described our three levels of cell-based provenance, we combine them into a single multilevel cell-based model for querying tables.
Definition 4.2 Given formal query INLINEFORM0 and table INLINEFORM1 , the multilevel cell-based provenance of INLINEFORM2 executed on INLINEFORM3 is a function, INLINEFORM4
Returning the provenance chain, INLINEFORM0
Query Operators
Using our model, we describe the multilevel cell-based provenance of several lambda DCS operator in Table TABREF21 . Provenance descriptions of all lambda DCS operators are provided in Table TABREF69 (at the end of the paper). For simplicity, we omit the table parameter INLINEFORM0 from provenance expressions, writing INLINEFORM1 instead of INLINEFORM2 . We also denote both cells and aggregate functions as belonging to the same set.
We use INLINEFORM0 to denote a table cell with value INLINEFORM1 , while denoting specific cell values by INLINEFORM2 . Each cell INLINEFORM3 belongs to a table record, INLINEFORM4 with a unique index, INLINEFORM5 (Section SECREF8 ). We distinguish between two types of lambda DCS formulas: formulas returning values are denoted by INLINEFORM6 while those returning table records by INLINEFORM7 .
Example 4.3 We explain the provenance of the following lambda DCS query, INLINEFORM0
It returns the values of column Year in records where column City is Athens, thus INLINEFORM0 will return all cells containing these values. INLINEFORM1
The cells involved in the execution of INLINEFORM0 include the output cells INLINEFORM1 in addition to the provenance of the sub-formula City.Athens, defined as all cells of column City with value Athens. INLINEFORM2
Where, INLINEFORM0
The provenance of the columns of INLINEFORM0 is simply all cells appearing in columns Year and City. INLINEFORM1
The provenance rules used in the examples regard the lambda DCS operators of "column records" and of "column values". The definition of the relevant provenance rules are described in the first two rows of Table TABREF69 .
Explaining Queries
To allow users to understand formal queries we must provide them with effective explanations. We describe the two methods of our system for explaining its generated queries to non-experts. Our first method translates formal queries into NL, deriving a detailed utterance representing the query. The second method implements the multilevel provenance model introduced in Section SECREF4 . For each provenance function ( INLINEFORM0 ) we uniquely highlight its cells, creating a visual explanation of the query execution.
Query to Utterance
Given a formal query in lambda DCS we provide a domain independent method for converting it into a detailed NL utterance. Drawing on the work in BIBREF7 we use a similar technique of deriving an NL utterance alongside the formal query. We introduce new NL templates describing complex lambda DCS operations for querying tables.
Example 5.1 The lambda DCS query, INLINEFORM0
is mapped to the utterance, "value in column Year where column Country is Greece". If we compose it with an aggregate function, INLINEFORM0
its respective utterance will be composed as well, being "maximum of values in column Year where column Country is Greece". The full derivation trees are presented in Figure FIGREF32 , where the original query parse tree is shown on the left, while our derived NL explanation is presented on the right.
We implement query to utterance as part of the semantic parser of our interface (Section SECREF42 ). The actual parsing of questions into formal queries is achieved using a context-free grammar (CFG). As shown in Figure FIGREF32 , formal queries are derived recursively by repeatedly applying the grammar deduction rules. Using the CYK BIBREF11 algorithm, the semantic parser returns derivation trees that maximize its objective (Section SECREF42 ). To generate an NL utterance for any formal query, we change the right-hand-side of each grammar rule to be a sequence of both non-terminals and NL phrases. For example, grammar rule: ("maximum of" Values INLINEFORM0 Entity) where Values, Entity and "maximum of" are its non-terminals and NL phrase respectively. Table TABREF33 describes the rules of the CFG augmented with our NL utterances. At the end of the derivation, the full query utterance can be read as the yield of the parse tree.
To utilize utterances as query explanations, we design them to be as clear and understandable as possible, albeit having a somewhat clumsy syntax. The references to table columns, rows as part of the NL utterance helps to clarify the actual semantics of the query to the non-expert users.
As the utterances are descriptions of formal queries, reading the utterance of each candidate query to determine its correctness might take some time. As user work-time is expensive, explanation methods that allow to quickly target correct results are necessary. We enhance utterances by employing provenance-based explanations, used for quickly identifying correct queries.
Provenance to Highlights
The understanding of a table query can be achieved by examining the cells on which it is executed. We explain a query by highlighting its multilevel cell-based provenance (Section SECREF4 ).
Using our provenance model, we define a procedure that takes a query as input and returns all cells involved in its execution on the corresponding table. These cells are then highlighted in the table, illustrating the query execution. Given a query INLINEFORM0 and table INLINEFORM1 , the INLINEFORM2 procedure divides cells into four types, based on their multilevel provenance functions. To help illustrate the query, each type of its provenance cells is highlighted differently: Colored cells are equivalent to INLINEFORM3 and are the cells returned by INLINEFORM4 as output, or used to compute the final output. Framed cells are equivalent to INLINEFORM5 and are the cells and aggregate functions used during query execution. Lit cells are equivalent to INLINEFORM6 , and are the cells of columns projected by the query. All other cells are unrelated to the query, hence no highlights are applied to them.
Example 5.2 Consider the lambda DCS query, INLINEFORM0
The utterance of this query is, "difference in column Total between rows where Nation is Fiji and Tonga". Figure FIGREF38 displays the highlights generated for this query, lighting all of the query's columns, framing its provenance cells and coloring the cells that comprise its output. In this example, all cells in columns Nation and Total are lit. The cells Fiji and Tonga are part of INLINEFORM0 and are therefore framed. The cells in INLINEFORM1 , containing 130 and 20, are colored as they contain the values used to compute the final result.
To highlight a query over the input table we call the procedure INLINEFORM0 with INLINEFORM1 . We describe our implementation in Algorithm SECREF34 . It is a recursive procedure which leverages the compositional nature of lambda DCS formulas. It decomposes the query INLINEFORM2 into its set of sub-formulas INLINEFORM3 , recursively computing the multilevel provenance. When reaching an atomic formula the algorithm will execute it and return its output. Cells returned by a sub-formula are both lit and framed, being part of INLINEFORM4 and INLINEFORM5 . Finally, all of the cells in INLINEFORM6 (Equation EQREF24 ) are colored.
Examples of provenance-based highlights are provided for several lambda DCS operators in Figures FIGREF38 - FIGREF38 . We display highlight examples for all lambda DCS operators in Figures TABREF70 - TABREF70 (at the end of the paper). Highlighting query cell-based provenance [1] Highlight INLINEFORM0 , INLINEFORM1 , INLINEFORM2 INLINEFORM3 provenance sets INLINEFORM4 INLINEFORM5 aggregate function INLINEFORM6 INLINEFORM7 is atomic INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 INLINEFORM12 INLINEFORM13 INLINEFORM14 INLINEFORM15 INLINEFORM16 INLINEFORM17 ; INLINEFORM18 INLINEFORM19 INLINEFORM20
We note that different queries may possess identical provenance-based highlights. Consider Figure FIGREF38 and the following query utterances,
"values in column Games that are more than 4."
"values in column Games that are at least 5 and also less than 17."
The highlights displayed on Figure FIGREF38 will be the same for both of the above queries. In such cases the user should refer to the NL utterances of the queries in order to distinguish between them. Thus our query explanation methods are complementary, with the provenance-based highlights providing quick visual feedback while the NL utterances serve as detailed descriptions.
Scaling to Large Tables
We elaborate on how our query explanations can be easily extended to tables with numerous records. Given the nature of the NL utterances, this form of explanation is independent of a table's given size. The utterance will still provide an informed explanation of the query regardless of the table size or its present relations.
When employing our provenance-based highlights to large tables it might seem intractable to display them to the user. However, the highlights are meant to explain the candidate query itself, and not the final answer returned by it. Thus we can precisely indicate to the user what are the semantics of the query by employing highlights to a subsample of the table.
An intuitive solution can be used to achieve a succinct sample. First we use Algorithm SECREF34 to compute the cell-based provenance sets INLINEFORM0 and to mark the aggregation operators on relevant table headers. We can then map each provenance cell to its relevant record (table row), enabling us to build corresponding record sets, INLINEFORM1 . To illustrate the query highlights we sample one record from each of the three sets: INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . In the special case of a query containing arithmetic difference (Figure FIGREF38 ), we select two records from INLINEFORM5 , one for each subtracted value. Sampled records are ordered according to their order in the original table. The example in Figure FIGREF40 contains three table rows selected from a large web table BIBREF12 .
Concrete Applications
So far we have described our methods for query explanations (Sections SECREF30 , SECREF34 ) and we now harness these methods to enhance an existing NL interface for querying tables.
Implementation
We return to our system architecture from Figure FIGREF7 . Presented with an NL question and corresponding table, our interface parses the question into lambda DCS queries using the state-of-the-art parser in BIBREF5 . The parser is trained for the task of querying web tables using the WikiTableQuestions dataset BIBREF1 .
Following the mapping of a question to a set of candidate queries, our interface will generate relevant query explanations for each of the queries, displaying a detailed NL utterance and highlighting the provenance data. The explanations are presented to non-technical users to assist in selecting the correct formal-query representing the question.
User feedback in the form of question-query pairs is also used offline in order to retrain the semantic parser.
We briefly describe the benchmark dataset used in our framework and its relation to the task of querying web tables.
WikiTableQuestions BIBREF1 is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was constructed by selecting data tables from Wikipedia that contained at least 8 rows and 5 columns. Amazon Mechanical Turk workers were then tasked with writing trivia questions about each table. In contrast to common NLIDB benchmarks BIBREF2 , BIBREF0 , BIBREF15 , WikiTableQuestions contains 22,033 questions and is an order of magnitude larger than previous state-of-the-art datasets. Its questions were not designed by predefined templates but were hand crafted by users, demonstrating high linguistic variance. Compared to previous datasets on knowledge bases it covers nearly 4,000 unique column headers, containing far more relations than closed domain datasets BIBREF15 , BIBREF2 and datasets for querying knowledge bases BIBREF16 . Its questions cover a wide range of domains, requiring operations such as table lookup, aggregation, superlatives (argmax, argmin), arithmetic operations, joins and unions. The complexity of its questions can be shown in Tables TABREF6 and TABREF66 .
The complete dataset contains 22,033 examples on 2,108 tables. As the test set, 20% of the tables and their associated questions were set aside, while the remaining tables and questions serve as the training set. The separation between tables in the training and test sets forces the question answering system to handle new tables with previously unseen relations and entities.
Training on Feedback
The goal of the semantic parser is to translate natural language questions into equivalent formal queries. Thus, in order to ideally train the parser, we should train it on questions annotated with their respective queries. However, annotating NL questions with formal queries is a costly operation, hence recent works have trained semantic parsers on examples labeled solely with their answer BIBREF17 , BIBREF18 , BIBREF0 , BIBREF1 . This weak supervision facilitates the training process at the cost of learning from incorrect queries. Figure FIGREF48 presents two candidate queries for the question "What was the last year the team was a part of the USL A-league?". Note that both queries output the correct answer to the question, which is 2004. However, the second query is clearly incorrect given its utterance is "minimum value in column Year in rows that have the highest value in column Open Cup".
The WikiTableQuestions dataset, on which the parser is trained, is comprised of question-answer pairs. Thus by retraining the parser on question-query pairs, that are provided as feedback, we can improve its overall correctness. We address this in our work by explaining queries to non-experts, enabling them to select the correct candidate query or mark None when all are incorrect.
These annotations are then used to retrain the semantic parser. Given a question, its annotations are the queries marked as correct by users. We note that a question may have more than one correct annotation.
Semantic Parsing is the task of mapping natural language questions to formal language queries (SQL, lambda DCS, etc.) that are executed against a target database. The semantic parser is a parameterized function, trained by updating its parameter vector such that questions from the training set are translated to formal queries yielding the correct answer.
We denote the table by INLINEFORM0 and the NL question by INLINEFORM1 . The semantic parser aims to generate a query INLINEFORM2 which executes to the correct answer of INLINEFORM3 on INLINEFORM4 , denoted by INLINEFORM5 . In our running example from Figure FIGREF1 , the parser tries to generate queries which execute to the value 2004. We define INLINEFORM6 as the set of candidate queries generated by parsing INLINEFORM7 . For each INLINEFORM8 we extract a feature vector INLINEFORM9 and define a log-linear distribution over candidates: DISPLAYFORM0
where INLINEFORM0 is the parameter vector. We formally define the parser distribution of yielding the correct answer, DISPLAYFORM0
where INLINEFORM0 is 1 when INLINEFORM1 and zero otherwise.
The parser is trained using examples INLINEFORM0 , optimizing the parameter vector INLINEFORM1 using AdaGrad BIBREF19 in order to maximize the following objective BIBREF1 , DISPLAYFORM0
where INLINEFORM0 is a hyperparameter vector obtained from cross-validation. To train a semantic parser that is unconstrained to any specific domain we deploy the parser in BIBREF5 , trained end-to-end on the WikiTableQuestions dataset BIBREF1 .
We modify the original parser so that annotated questions are trained using question-query pairs while all other questions are trained as before. The set of annotated examples is denoted by INLINEFORM0 . Given annotated example INLINEFORM1 , its set of valid queries is INLINEFORM2 . We define the distribution for an annotated example to yield the correct answer by, DISPLAYFORM0
Where INLINEFORM0 is 1 when INLINEFORM1 and zero otherwise. Our new objective for retraining the semantic parser, DISPLAYFORM0
the first sum denoting the set of annotated examples, while the second sum denotes all other examples.
This enables the parser to update its parameters so that questions are translated into correct queries, rather than merely into queries that yield the correct answer.
Deployment
At deployment, user interaction is used to ensure that the system returns formal-queries that are correct.
We have constructed a web interface allowing users to pose NL questions on tables and by using our query explanations, to choose the correct query from the top-k generated candidates. Normally, a semantic parser receives an NL question as input and displays to the user only the result of its top ranked query. The user receives no explanation as to why was she returned this specific result or whether the parser had managed to correctly parse her question into formal language. In contrast to the baseline parser, our system displays to users its top-k candidates, allowing them to modify the parser's top query.
Example 6.1 Figure FIGREF51 shows an example from the WikiTableQuestions test set with the question "How many more ships were wrecked in lake Huron than in Erie". Note that the original table contains many more records than those displayed in the figure. Given the explanations of the parser's top candidates, our provenance-based highlights make it clear that the first query is correct, as it compares the table occurrences of lakes Huron and Erie. The second query is incorrect, comparing lakes Huron and Superior, while the third query does not compare occurrences.
Experiments
Following the presentation of concrete applications for our methods, we designed an experimental study to measure the effect of our query explanation mechanism. We conducted experiments to evaluate both the quality of our explanations and their contribution to the baseline parser; this section addresses each of these two parts in turn.
The experimental results show our query explanations to be effective, allowing non-experts to easily understand generated queries and to disqualify incorrect ones. Training on user feedback further improves the system correctness, allowing it to learn from user experience.
Evaluation Metrics
We begin by defining the system correctness, used as our main evaluation metric. Recall that the semantic parser is given an NL question INLINEFORM0 and table INLINEFORM1 and generates a set INLINEFORM2 of candidate queries. Each query INLINEFORM3 is then executed against the table, yielding result INLINEFORM4 . We define the parser correctness as the percentage of questions where the top-ranked query is a correct translation of INLINEFORM5 from NL to lambda DCS. In addition to correctness, we also measured the mean reciprocal rank (MRR), used for evaluating the average correctness of all candidate queries generated, rather than only that of the top-1.
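A small sketch of both metrics is given below, assuming that for each question we already have a boolean judgment per ranked candidate of whether it is a correct translation (this judgment itself requires manual inspection and is not computed automatically).

```python
# Correctness looks only at the top-ranked candidate; MRR uses the rank of the
# first correct candidate (contributing 0 if no candidate is correct).
def correctness(per_question_flags):
    """`per_question_flags[i]` is a list of booleans in parser rank order."""
    return sum(bool(flags and flags[0]) for flags in per_question_flags) / len(per_question_flags)

def mean_reciprocal_rank(per_question_flags):
    total = 0.0
    for flags in per_question_flags:
        rank = next((i + 1 for i, ok in enumerate(flags) if ok), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(per_question_flags)
```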
Example 7.1 To illustrate the difference between correct answers and correct queries let us consider the example in Figure FIGREF48 . The parser generates the following candidate queries (we present only their utterances):
maximum value in column Year in rows where value of column League is USL A-League.
minimum value in column Year in rows that have the highest value in column Open Cup.
Both return the correct answer 2004; however, only the first query conveys the correct translation of the NL question.
Interactive Parsing at Deployment
We use query explanations to improve the real-time performance of the semantic parser. Given any NL question on a (never before seen) table, the parser will generate a set of candidate queries. Using our explanations, the user will interactively select the correct query (when generated) from the parser's top-k results. We compare the correctness scores of our interactive method with that of the baseline parser.
Our user study was conducted using anonymous workers recruited through the Amazon Mechanical Turk (AMT) crowdsourcing platform. Focusing on non-experts, our only requirements were that participants be over 18 years old and reside in a native English-speaking country. Our study included 35 distinct workers, a significant number of participants compared to previous works on NL interfaces BIBREF4 , BIBREF15 , BIBREF20 . Rather than relying on a small set of NL test questions BIBREF4 , BIBREF15 , we presented each worker with 20 distinct questions that were randomly selected from the WikiTableQuestions benchmark dataset (Section SECREF41 ). A total of 405 distinct questions were presented (as described in Table TABREF59 ). For each question, workers were shown explanations (utterances, highlights) of the top-7 candidate queries generated. Candidates were randomly ordered, rather than ranked by the parser scores, so that users would not be biased towards the parser's top query. Given a question, participants were asked to mark the correct candidate query, or None if no correct query was generated.
Displaying the top-k results allowed workers to improve the baseline parser in cases where the correct query was generated but not ranked at the top. After examining different values of INLINEFORM0 , we chose to display top-k queries with INLINEFORM1 . We made sure to validate that our choice of INLINEFORM2 was sufficiently large, so that it included the correct query (when generated). We randomly selected 100 examples where no correct query was generated in the top-7 and examined whether one was generated within the top-14 queries. Results showed that for INLINEFORM3 only 5% of the examples contained a correct query, a minor improvement at the cost of doubling user effort. Thus, a choice of INLINEFORM4 appears to be reasonable.
To verify that our query explanations were understandable to non-experts, we measured each worker's success. Results in Table TABREF59 show that in 78.4% of the cases, workers succeeded in identifying the correct query or in identifying that no candidate query was correct. The average success rate across all 35 workers was 15.7/20 questions. When comparing our explanation approach (utterances + highlights) to a baseline of no explanations, non-expert users failed to identify correct queries when shown only lambda DCS queries. This demonstrates that utterances and provenance-based highlights serve as effective explanations of formal queries to the layperson. We now show that using them jointly is superior to using only utterances.
When introducing our two explanation methods, we noted their complementary nature. NL utterances serve as highly detailed phrases describing the query, while highlighting provenance cells allows users to quickly single out the correct queries. We put this claim to the test by measuring the impact our novel provenance-based highlights had on the average work-time of users. We measured the work-time of 20 distinct AMT workers, divided into two separate groups, each containing half of the participants. Workers from both groups were presented with 20 questions from WikiTableQuestions. The first group of workers was presented with both highlights and utterances as their query explanations, while the second group had to rely solely on NL utterances. Though both groups achieved identical correctness results, the group employing table highlights performed significantly faster. Results in Table TABREF60 show our provenance-based explanations cut the average and median work-time by 34% and 20%, respectively. Since user work-time is valuable, the introduction of visual explanations such as table highlights may lead to significant savings in worker costs.
We have examined the extent to which our query explanations can help users improve the correctness of a baseline NL interface. Our user study compares the correctness of three scenarios:
Parser correctness - our baseline is the percentage of examples where the top query returned by the semantic parser was correct.
User correctness - the percentage of examples where the user selected a correct query from the top-7 generated by the parser.
Hybrid correctness - correctness of queries returned by a combination of the previous two scenarios. The system returns the query marked by the user as correct; if the user marks all queries as incorrect, it returns the parser's top candidate (a minimal sketch of this decision rule follows below).
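The hybrid decision rule amounts to a one-line fallback; a minimal sketch, with hypothetical argument names, is:

```python
# Return the user-selected query when one exists; otherwise fall back to the
# parser's top-ranked candidate.
def hybrid_choice(ranked_candidates, user_selection=None):
    return user_selection if user_selection is not None else ranked_candidates[0]
```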
Results in Table TABREF64 show the correctness rates of these scenarios. The user correctness score is superior to that of the baseline parser by 7.5% (from 37.1% to 44.6%), while the hybrid approach outscores both with a correctness of 48.7%, improving the baseline by 11.6%. For the user and hybrid correctness we used a INLINEFORM0 test to measure significance. Random queries and tables included in the experiment are presented in Table TABREF66 . We also compare the top-ranked query of the baseline parser to the query selected by the user.
We define the correctness bound as the percentage of examples where the top-k candidate queries actually contain a correct result. This bound serves as the optimal correctness score that workers can achieve. The 56% correctness-bound of the baseline parser stems from the sheer complexity of the WikiTableQuestions benchmark. Given that the training and test tables are disjoint, the parser is tested on relations and entities unobserved during its training. This task of generalizing to unseen domains is an established challenge in semantic parsing BIBREF1 , BIBREF21 . Using the correctness-bound as an upper bound on our results shows the hybrid approach achieves 87% of its full potential. Though there is some room for improvement, this seems reasonable given that our non-expert workers possess no prior experience with their given task.
We describe the execution times for generating our query explanations in Table TABREF65 . We trained the semantic parser using the SEMPRE toolkit BIBREF0 on a machine with a Xeon 2.20GHz CPU and 256GB RAM running Linux Ubuntu 14.04 LTS. We report the average generation times of candidate queries, utterances and highlights over the entire WikiTableQuestions test set, numbering 4,344 questions.
Training on User Feedback
We measure our system's ability to learn from user feedback in the form of question-query pairs. Given a question, the user is shown explanations of the parser's top-7 queries and uses them to annotate the question, i.e., to assign to it correct formal queries (e.g., the first query in Figure FIGREF48 ). Annotations were collected by showing users questions from the WikiTableQuestions training set along with query explanations of the parser's results. To enhance the annotation quality, each question was presented to three distinct users, and we kept only the annotations marked as correct by at least two of them. Data collection was done using AMT and in total, 2,068 annotated questions were collected. Following a standard methodology, we split the annotated data into train and development sets. Out of our 2,068 annotated examples, 418 were selected as the development set, and 1,650 as the training set. The annotated development examples were used to evaluate the effect of our annotations on parser correctness.
We experiment with two scenarios: (1) training the parser solely on the 1,650 annotated examples; (2) integrating our training examples into the entire WikiTableQuestions training set of 11K examples. For each scenario we trained two parsers, one using the annotations and the other without them. To gain more robust results we ran our experiments on three different train/dev splits of our data, averaging the results. Table TABREF68 displays the results of our experiments. When training solely on the annotated examples, parser correctness on the development examples increased by 8% (from 41.8% to 49.8%). This spike in correctness shows that feedback acquired using our explanations is high-quality input for the semantic parser: the parser achieves better correctness when trained on it than when trained on the original WikiTableQuestions benchmark.
When training on all 11K training examples together with our 1,650 annotations we also saw an increase (of 2.1%), albeit a more modest one due to the small percentage of annotated examples. We witnessed an increase in both correctness and MRR (mean reciprocal rank) that grows with the number of annotated training examples. This further attests to the significance of annotated training data BIBREF22 , BIBREF14 and shows that our system can learn from quality feedback collected by non-experts.
Conclusion and Future Work
We have studied in this paper the problem of explaining complex NL queries to non-expert users. We introduced visual query explanations in the form of table highlights, based on a novel cell-based provenance model tested on web tables from hundreds of distinct domains. Table highlights provide immediate visual feedback for identifying correct candidate queries. We combine table highlights with utterance-based query explanations, significantly improving their effectiveness. Using our query explanations we enhanced an NL interface for querying tables by providing it with feedback at both deployment and training time. Feedback is procured through query explanations, allowing users with no technical background to query tables with confidence, while simultaneously providing feedback to enhance the interface itself. We implement a human-in-the-loop paradigm, where our users both exploit the underlying Machine Learning algorithm and provide it with further data to train on.
We have put our methods to the test, having conducted an extensive user study to determine the clarity of our explanations. Experimenting with explanations for hundreds of formal queries, users proved to be successful in interactively choosing correct queries, easily topping the baseline parser correctness. The addition of provenance-based highlights helps boost the efficacy of user feedback, cutting average work-time by a third compared to the utterances baseline. | Unanswerable |
84d36bca06786070e49d3db784e42a51dd573d36 | 84d36bca06786070e49d3db784e42a51dd573d36_0 | Q: What was the task given to workers?
Text: Introduction
Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.
Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2 , by filtering out poor tasks or workers, or by best incentivizing workers BIBREF3 . At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time.
One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4 , such as used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what are the best approaches or general strategies to achieve efficiency gains for creative crowdsourcing tasks.
In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing is completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower and their responses were more diverse than workers in the non-AUI control group.
Related Work
An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5 , BIBREF6 , BIBREF7 while less work has been put towards the latter.
One approach to helping workers be faster at individual tasks is the application of usability studies. BIBREF8 ( BIBREF8 ) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces. More recent usability studies on the efficiency and accuracy of workers include: BIBREF9 ( BIBREF9 ), who consider the task completion times of macrotasks and microtasks and find workers given smaller microtasks were slower but achieve higher quality than those given larger macrotasks; BIBREF10 ( BIBREF10 ), who study how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and BIBREF11 ( BIBREF11 ), who study completion times for relevance judgment tasks, and find that imposed time limits can improve relevance quality, but do not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here.
The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google's main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed.
It is generally assumed that AUIs make users faster by saving keystrokes BIBREF12 . However, there is considerable debate about whether or not such gains are countered by increased cognitive load induced by processing the given autocompletions BIBREF13 . BIBREF14 ( BIBREF14 ) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. BIBREF4 ( BIBREF4 ) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on response times of the AUI suggestions, and so it is unclear if the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings BIBREF15 , but it is not clear how general such learning may be, and whether or not it is relevant to short-duration crowdsourcing tasks.
Experimental design
Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis.
Data collection
We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.
After Control and AUI workers were finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question and response. Each unique question-response pair ( INLINEFORM0 ) was rated at least 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings.
Differences in response time
We found that workers were slower overall with the AUI than without the AUI. In Fig. FIGREF16 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97s for Control compared with median 2.69s for AUI). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27s vs. 4.44s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay may be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than control (median 7.64s for Control vs. 12.14s for AUI). We conclude that the AUI makes workers significantly slower.
We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced.
To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. FIGREF17 ). Indeed, learning did occur: the submission delay (but not typing duration) decreased as workers responded to more questions. However, this did not translate to gains in overall performance between Control and AUI workers as learning occurred for both groups: among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178s. This difference between Control and AUI submission delays was significant (Mann-Whitney test: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97s (7.00s). This difference was also significant (Mann-Whitney test: INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ). We conclude that experience with the AUI will not eventually lead to responses faster than those of the control.
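For concreteness, a minimal sketch of this kind of comparison is shown below, assuming two lists of per-response submission delays in seconds; the function and variable names are illustrative.

```python
# Compare AUI vs. Control submission delays with medians and a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_delays(aui_delays, control_delays):
    stat, p_value = mannwhitneyu(aui_delays, control_delays, alternative="two-sided")
    return {
        "median_aui": float(np.median(aui_delays)),
        "median_control": float(np.median(control_delays)),
        "U": float(stat),
        "p_value": float(p_value),
    }
```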
Differences in response diversity
We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task.
To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.
Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique but when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B).
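The response-density measure reduces to a one-line computation; a sketch:

```python
# Number of responses divided by the number of unique responses:
# 1.0 when every response is unique, n when all n responses are identical.
def response_density(responses):
    return len(responses) / len(set(responses))
```

For example, response_density(["pliers", "pliers", "wrench"]) returns 1.5 (a hypothetical illustration, not data from the study).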
Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporate the semantic meanings of words and phrases by learning vector representations that are embedded into a high-dimensional vector space BIBREF18 , BIBREF19 . Vector operations within this space such as addition and subtraction are capable of representing meaning and interrelationships between words BIBREF19 . For example, the vector INLINEFORM0 is very close to the vector INLINEFORM1 , indicating that these vectors capture analogy relations. Here we used 300-dimension word vectors trained on a 100B-word corpus taken from Google News (word2vec). For each question we computed the average similarity between words in the responses to that question—a lower similarity implies more semantically diverse answers. Specifically, for a given question INLINEFORM2 , we concatenated all responses to that question into a single document INLINEFORM3 , and averaged the vector similarities INLINEFORM4 of all pairs of words INLINEFORM5 in INLINEFORM6 , where INLINEFORM7 is the word vector corresponding to word INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 if INLINEFORM1 and zero otherwise. We also excluded from EQREF21 any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For similarity INLINEFORM2 we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word vector similarity INLINEFORM3 (and are thus collectively more semantically diverse) when considering AUI responses as the document INLINEFORM4 than when INLINEFORM5 came from the Control workers (Fig. FIGREF19 C). The difference was significant (Wilcoxon signed rank test paired on questions: INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ).
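A sketch of this semantic-diversity computation is given below; `vectors` is assumed to be a mapping from words to pre-trained embeddings (e.g. word2vec Google News vectors), and out-of-vocabulary words are skipped as in the analysis above.

```python
# Average pairwise cosine similarity between word vectors of all words pooled
# from the responses to one question; lower similarity = more semantic diversity.
import itertools
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_pairwise_similarity(pooled_responses, vectors):
    words = [w for w in " ".join(pooled_responses).split() if w in vectors]
    sims = [cosine(np.asarray(vectors[w1]), np.asarray(vectors[w2]))
            for w1, w2 in itertools.combinations(words, 2)]
    return sum(sims) / len(sims) if sims else float("nan")
```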
Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave.
No difference in response quality
Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. FIGREF23 . While there was variation in overall quality across different questions (Fig. FIGREF23 A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. FIGREF23 B). We conclude that the AUI neither increased nor decreased response quality.
Discussion
We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than if the AUI were not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers.
A challenge with text-focused crowdsourcing is aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this in the sense that the sets of responses became more diverse, not less. Yet, this may in fact be desirable—if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them.
One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop down menu (Fig. FIGREF2 B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area.
We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our Javascript instrumentation logged worker keystrokes as they typed, so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text input by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not have even noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. FIGREF24 A) and, of those responses, most were found in the first few (reconstructed) positions near the top of the AUI (Fig. FIGREF24 B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six. With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that the AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI which is most quickly accessible to the worker.
Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question if an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing which far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces.
Acknowledgments
We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634. | conceptualization task |
7af01e2580c332e2b5e8094908df4e43a29c8792 | 7af01e2580c332e2b5e8094908df4e43a29c8792_0 | Q: How was lexical diversity measured?
Text: Introduction
Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.
Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2 , by filtering out poor tasks or workers, or by best incentivizing workers BIBREF3 . At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time.
One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4 , such as used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what are the best approaches or general strategies to achieve efficiency gains for creative crowdsourcing tasks.
In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing is completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower and their responses were more diverse than workers in the non-AUI control group.
Related Work
An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5 , BIBREF6 , BIBREF7 while less work has been put towards the latter.
One approach to helping workers be faster at individual tasks is the application of usability studies. BIBREF8 ( BIBREF8 ) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces. More recent usability studies on the efficiency and accuracy of workers include: BIBREF9 ( BIBREF9 ), who consider the task completion times of macrotasks and microtasks and find workers given smaller microtasks were slower but achieve higher quality than those given larger macrotasks; BIBREF10 ( BIBREF10 ), who study how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and BIBREF11 ( BIBREF11 ), who study completion times for relevance judgment tasks, and find that imposed time limits can improve relevance quality, but do not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here.
The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google's main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed.
It is generally assumed that AUIs make users faster by saving keystrokes BIBREF12 . However, there is considerable debate about whether or not such gains are countered by increased cognitive load induced by processing the given autocompletions BIBREF13 . BIBREF14 ( BIBREF14 ) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. BIBREF4 ( BIBREF4 ) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on response times of the AUI suggestions, and so it is unclear if the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings BIBREF15 , but it is not clear how general such learning may be, and whether or not it is relevant to short-duration crowdsourcing tasks.
Experimental design
Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis.
Data collection
We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.
After Control and AUI workers were finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question and response. Each unique question-response pair ( INLINEFORM0 ) was rated at least 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings.
Differences in response time
We found that workers were slower overall with the AUI than without the AUI. In Fig. FIGREF16 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97s for Control compared with median 2.69s for AUI). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27s vs. 4.44s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay may be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than control (median 7.64s for Control vs. 12.14s for AUI). We conclude that the AUI makes workers significantly slower.
We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced.
To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. FIGREF17 ). Indeed, learning did occur: the submission delay (but not typing duration) decreased as workers responded to more questions. However, this did not translate to gains in overall performance between Control and AUI workers as learning occurred for both groups: among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178s. This difference between Control and AUI submission delays was significant (Mann-Whitney test: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97s (7.00s). This difference was also significant (Mann-Whitney test: INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ). We conclude that experience with the AUI will not eventually lead to responses faster than those of the control.
Differences in response diversity
We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task.
To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.
Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique but when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B).
Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporate the semantic meanings of words and phrases by learning vector representations that are embedded into a high-dimensional vector space BIBREF18 , BIBREF19 . Vector operations within this space such as addition and subtraction are capable of representing meaning and interrelationships between words BIBREF19 . For example, the vector INLINEFORM0 is very close to the vector INLINEFORM1 , indicating that these vectors capture analogy relations. Here we used 300-dimension word vectors trained on a 100B-word corpus taken from Google News (word2vec). For each question we computed the average similarity between words in the responses to that question—a lower similarity implies more semantically diverse answers. Specifically, for a given question INLINEFORM2 , we concatenated all responses to that question into a single document INLINEFORM3 , and averaged the vector similarities INLINEFORM4 of all pairs of words INLINEFORM5 in INLINEFORM6 , where INLINEFORM7 is the word vector corresponding to word INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 if INLINEFORM1 and zero otherwise. We also excluded from EQREF21 any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For similarity INLINEFORM2 we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word vector similarity INLINEFORM3 (and are thus collectively more semantically diverse) when considering AUI responses as the document INLINEFORM4 than when INLINEFORM5 came from the Control workers (Fig. FIGREF19 C). The difference was significant (Wilcoxon signed rank test paired on questions: INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ).
Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave.
No difference in response quality
Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. FIGREF23 . While there was variation in overall quality across different questions (Fig. FIGREF23 A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. FIGREF23 B). We conclude that the AUI neither increased nor decreased response quality.
Discussion
We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than if the AUI were not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers.
A challenge with text-focused crowdsourcing is aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this in the sense that the sets of responses became more diverse, not less. Yet, this may in fact be desirable—if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them.
One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop down menu (Fig. FIGREF2 B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area.
We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our Javascript instrumentation logged worker keystrokes as they typed, so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text input by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not have even noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. FIGREF24 A) and, of those responses, most were found in the first few (reconstructed) positions near the top of the AUI (Fig. FIGREF24 B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six. With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that the AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI which is most quickly accessible to the worker.
Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question if an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing which far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces.
Acknowledgments
We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634. | By computing number of unique responses and number of responses divided by the number of unique responses to that question for each of the questions |
c78f18606524539e4c573481e5bf1e0a242cc33c | c78f18606524539e4c573481e5bf1e0a242cc33c_0 | Q: How many responses did they obtain?
Text: Introduction
Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.
Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2 , by filtering out poor tasks or workers, or by best incentivizing workers BIBREF3 . At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time.
One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4 , such as used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what are the best approaches or general strategies to achieve efficiency gains for creative crowdsourcing tasks.
In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing is completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower and their responses were more diverse than workers in the non-AUI control group.
Related Work
An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5 , BIBREF6 , BIBREF7 while less work has been put towards the latter.
One approach to helping workers be faster at individual tasks is the application of usability studies. BIBREF8 ( BIBREF8 ) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces. More recent usability studies on the efficiency and accuracy of workers include: BIBREF9 ( BIBREF9 ), who consider the task completion times of macrotasks and microtasks and find workers given smaller microtasks were slower but achieve higher quality than those given larger macrotasks; BIBREF10 ( BIBREF10 ), who study how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and BIBREF11 ( BIBREF11 ), who study completion times for relevance judgment tasks, and find that imposed time limits can improve relevance quality, but do not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here.
The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google's main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed.
It is generally assumed that AUIs make users faster by saving keystrokes BIBREF12 . However, there is considerable debate about whether or not such gains are countered by increased cognitive load induced by processing the given autocompletions BIBREF13 . BIBREF14 ( BIBREF14 ) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. BIBREF4 ( BIBREF4 ) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on response times of the AUI suggestions, and so it is unclear if the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings BIBREF15 , but it is not clear how general such learning may be, and whether or not it is relevant to short-duration crowdsourcing tasks.
Experimental design
Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis.
Data collection
We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.
After Control and AUI workers were finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question and response. Each unique question-response pair ( INLINEFORM0 ) was rated at least 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings.
Differences in response time
We found that workers were slower overall with the AUI than without the AUI. In Fig. FIGREF16 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97s for Control compared with median 2.69s for AUI). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27s vs. 4.44s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay may be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than control (median 7.64s for Control vs. 12.14s for AUI). We conclude that the AUI makes workers significantly slower.
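As a rough illustration of how these two timing measures can be derived from the instrumentation logs, the following sketch (not the study's actual analysis code; the column names and toy values are assumptions) computes typing duration, submission delay, and their sum per response, and then the per-group medians:

```python
# Sketch (assumed column names): derive the two timing measures from logged
# timestamps and compare per-group medians.
import pandas as pd

def timing_metrics(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["typing_duration"] = out["t_last_key"] - out["t_first_key"]
    out["submission_delay"] = out["t_submit"] - out["t_last_key"]
    out["total_time"] = out["typing_duration"] + out["submission_delay"]
    return out

# Toy log with timestamps in seconds since the task page loaded.
log = pd.DataFrame({
    "group":       ["Control", "Control", "AUI", "AUI"],
    "t_first_key": [3.0, 2.5, 3.2, 2.8],
    "t_last_key":  [5.0, 4.4, 5.9, 5.6],
    "t_submit":    [9.4, 8.9, 13.2, 12.8],
})
print(timing_metrics(log).groupby("group")[
    ["typing_duration", "submission_delay", "total_time"]
].median())
```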
We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced.
To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. FIGREF17 ). Indeed, learning did occur: the submission delay (but not typing duration) decreased as workers responded to more questions. However, this did not translate to gains in overall performance between Control and AUI workers, as learning occurred for both groups: among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178s. This difference between Control and AUI submission delays was significant (Mann-Whitney test: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97s (7.00s). This difference was also significant (Mann-Whitney test: INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ). We conclude that experience with the AUI will not eventually lead to faster responses than those of the control.
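A minimal sketch of the kind of group comparison reported above, using a Mann-Whitney U test on submission delays at a fixed level of experience; the samples below are synthetic placeholders, not the study's data:

```python
# Sketch: Mann-Whitney U comparison of AUI vs. Control submission delays on
# the 10th answered question; the samples are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
delay_aui_q10 = rng.lognormal(mean=2.1, sigma=0.4, size=50)
delay_ctrl_q10 = rng.lognormal(mean=1.4, sigma=0.4, size=55)

stat, p = mannwhitneyu(delay_aui_q10, delay_ctrl_q10, alternative="two-sided")
print(f"median AUI={np.median(delay_aui_q10):.2f}s, "
      f"median Control={np.median(delay_ctrl_q10):.2f}s, U={stat:.0f}, p={p:.2g}")
```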
Differences in response diversity
We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task.
To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.
Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique; when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B).
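The response-density measure and the paired Wilcoxon comparison can be sketched as follows (the response lists are toy data for illustration only):

```python
# Sketch: response density per question and a Wilcoxon signed-rank test
# paired on questions; the response lists are toy data.
from scipy.stats import wilcoxon

def response_density(responses):
    return len(responses) / len(set(responses))

responses = {  # question -> group -> worker responses
    "q1": {"Control": ["dog", "dog", "canine", "dog"],
           "AUI":     ["dog", "canine", "puppy", "hound"]},
    "q2": {"Control": ["tool", "tool", "utensil"],
           "AUI":     ["tool", "implement", "utensil"]},
    "q3": {"Control": ["fruit", "fruit", "fruit", "food"],
           "AUI":     ["fruit", "food", "produce", "fruit"]},
}

ctrl = [response_density(r["Control"]) for r in responses.values()]
aui = [response_density(r["AUI"]) for r in responses.values()]
stat, p = wilcoxon(ctrl, aui)
print(f"Control densities={ctrl}, AUI densities={aui}, W={stat}, p={p:.3f}")
```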
Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporate the semantic meanings of words and phrases by learning vector representations that are embedded into a high-dimensional vector space BIBREF18 , BIBREF19 . Vector operations within this space such as addition and subtraction are capable of representing meaning and interrelationships between words BIBREF19 . For example, the vector INLINEFORM0 is very close to the vector INLINEFORM1 , indicating that these vectors capture analogy relations. Here we used 300-dimension word vectors trained on a 100B-word corpus taken from Google News (word2vec). For each question we computed the average similarity between words in the responses to that question—a lower similarity implies more semantically diverse answers. Specifically, for a given question INLINEFORM2 , we concatenated all responses to that question into a single document INLINEFORM3 , and averaged the vector similarities INLINEFORM4 of all pairs of words INLINEFORM5 in INLINEFORM6 , where INLINEFORM7 is the word vector corresponding to word INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 if INLINEFORM1 and zero otherwise. We also excluded from EQREF21 any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For similarity INLINEFORM2 we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word vector similarity INLINEFORM3 (and are thus collectively more semantically diverse) when considering AUI responses as the document INLINEFORM4 than when INLINEFORM5 came from the Control workers (Fig. FIGREF19 C). The difference was significant (Wilcoxon signed rank test paired on questions: INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ).
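A sketch of this semantic-diversity computation, using pre-trained Google News word2vec vectors loaded through gensim (the loader and model name are assumptions; the paper only specifies 300-dimension Google News vectors):

```python
# Sketch: average pairwise cosine similarity over the words in the pooled
# responses to a question, using 300-d Google News word2vec vectors.
# The gensim loader below is an assumption about how to obtain such vectors.
import itertools
import numpy as np
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

def avg_pairwise_similarity(document_words):
    words = [w for w in document_words if w in wv]   # skip out-of-vocabulary words
    sims = [wv.similarity(w1, w2)
            for w1, w2 in itertools.combinations(words, 2)]  # distinct pairs only
    return float(np.mean(sims)) if sims else float("nan")

# Lower values indicate a more semantically diverse pool of responses.
print(avg_pairwise_similarity("dog canine puppy spreadsheet".split()))
```

Averaging over unordered distinct pairs gives the same value as the symmetric double sum with self-pairs excluded, since the similarity is symmetric.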
Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave.
No difference in response quality
Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. FIGREF23 . While there was variation in overall quality across different questions (Fig. FIGREF23 A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. FIGREF23 B). We conclude that the AUI neither increased nor decreased response quality.
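One way such a per-question comparison of rating distributions could be run (the test choice and the toy ratings are illustrative assumptions, not the study's exact procedure):

```python
# Sketch: comparing per-question rating distributions between groups.
from scipy.stats import mannwhitneyu

ratings_control = {"q1": [4, 5, 3, 4, 4, 5, 4, 3], "q2": [3, 3, 4, 2, 3, 4, 3, 3]}
ratings_aui     = {"q1": [4, 4, 3, 5, 4, 4, 5, 3], "q2": [3, 4, 3, 3, 2, 4, 3, 3]}

for q in ratings_control:
    stat, p = mannwhitneyu(ratings_control[q], ratings_aui[q], alternative="two-sided")
    print(f"{q}: U={stat:.0f}, p={p:.3f}")
```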
Discussion
We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than if the AUI was not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers.
A challenge with text-focused crowdsourcing is aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this in the sense that the sets of responses became more diverse, not less. Yet, this may in fact be desirable—if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them.
One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop down menu (Fig. FIGREF2 B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area.
We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our Javascript instrumentation logged worker keystrokes as they typed, so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text input by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not have even noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. FIGREF24 A) and, of those responses, most were found in the first few (reconstructed) positions near the top of the AUI (Fig. FIGREF24 B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six. With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that the AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI which is most quickly accessible to the worker.
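The reconstruction procedure can be sketched as follows, where the suggestion function is an assumed stand-in for the AUI's prefix query (returning up to the twelve options described earlier):

```python
# Sketch of the position reconstruction: replay the last logged prefix against
# an assumed stand-in for the AUI's prefix query (up to the 12 options shown),
# then locate the submitted response in that list.
def suggest(prefix, vocabulary, n_shown=12):
    """Return up to n_shown suggestions starting with the typed prefix."""
    prefix = prefix.lower().strip()
    return [w for w in vocabulary if w.lower().startswith(prefix)][:n_shown]

def reconstructed_position(logged_prefix, final_response, vocabulary):
    options = suggest(logged_prefix, vocabulary)
    try:
        return options.index(final_response) + 1   # 1-based menu position
    except ValueError:
        return None   # response not found among the reconstructed suggestions

vocab = ["container", "contamination", "content", "contest", "continent", "contract"]
print(reconstructed_position("cont", "content", vocab))   # -> 3
```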
Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question if an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing which far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces.
Acknowledgments
We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634. | 1001 |
0cf6d52d7eafd43ff961377572bccefc29caf612 | 0cf6d52d7eafd43ff961377572bccefc29caf612_0 | Q: What crowdsourcing platform was used?
Text: Introduction
Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.
Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2 , by filtering out poor tasks or workers, or by best incentivizing workers BIBREF3 . At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time.
One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4 , such as used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what are the best approaches or general strategies to achieve efficiency gains for creative crowdsourcing tasks.
In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing is completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower and their responses were more diverse than workers in the non-AUI control group.
Related Work
An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5 , BIBREF6 , BIBREF7 while less work has been put towards the latter.
One approach to helping workers be faster at individual tasks is the application of usability studies. BIBREF8 ( BIBREF8 ) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces. More recent usability studies on the efficiency and accuracy of workers include: BIBREF9 ( BIBREF9 ), who consider the task completion times of macrotasks and microtasks and find workers given smaller microtasks were slower but achieve higher quality than those given larger macrotasks; BIBREF10 ( BIBREF10 ), who study how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and BIBREF11 ( BIBREF11 ), who study completion times for relevance judgment tasks, and find that imposed time limits can improve relevance quality, but do not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here.
The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google's main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed.
It is generally assumed that AUIs make users faster by saving keystrokes BIBREF12 . However, there is considerable debate about whether or not such gains are countered by increased cognitive load induced by processing the given autocompletions BIBREF13 . BIBREF14 ( BIBREF14 ) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. BIBREF4 ( BIBREF4 ) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on response times of the AUI suggestions, and so it is unclear if the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings BIBREF15 , but it is not clear how general such learning may be, and whether or not it is relevant to short-duration crowdsourcing tasks.
Experimental design
Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis.
Data collection
We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.
After Control and AUI workers were finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question and response. Each unique question-response pair ( INLINEFORM0 ) was rated at least 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings.
Differences in response time
We found that workers were slower overall with the AUI than without the AUI. In Fig. FIGREF16 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97s for Control compared with median 2.69s for AUI). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27s vs. 4.44s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay may be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than control (median 7.64s for Control vs. 12.14s for AUI). We conclude that the AUI makes workers significantly slower.
We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced.
To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. FIGREF17 ). Indeed, learning did occur: the submission delay (but not typing duration) decreased as workers responded to more questions. However, this did not translate to gains in overall performance between Control and AUI workers, as learning occurred for both groups: among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178s. This difference between Control and AUI submission delays was significant (Mann-Whitney test: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97s (7.00s). This difference was also significant (Mann-Whitney test: INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ). We conclude that experience with the AUI will not eventually lead to faster responses than those of the control.
Differences in response diversity
We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task.
To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.
Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique; when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B).
Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporate the semantic meanings of words and phrases by learning vector representations that are embedded into a high-dimensional vector space BIBREF18 , BIBREF19 . Vector operations within this space such as addition and subtraction are capable of representing meaning and interrelationships between words BIBREF19 . For example, the vector INLINEFORM0 is very close to the vector INLINEFORM1 , indicating that these vectors capture analogy relations. Here we used 300-dimension word vectors trained on a 100B-word corpus taken from Google News (word2vec). For each question we computed the average similarity between words in the responses to that question—a lower similarity implies more semantically diverse answers. Specifically, for a given question INLINEFORM2 , we concatenated all responses to that question into a single document INLINEFORM3 , and averaged the vector similarities INLINEFORM4 of all pairs of words INLINEFORM5 in INLINEFORM6 , where INLINEFORM7 is the word vector corresponding to word INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 if INLINEFORM1 and zero otherwise. We also excluded from EQREF21 any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For similarity INLINEFORM2 we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word vector similarity INLINEFORM3 (and are thus collectively more semantically diverse) when considering AUI responses as the document INLINEFORM4 than when INLINEFORM5 came from the Control workers (Fig. FIGREF19 C). The difference was significant (Wilcoxon signed rank test paired on questions: INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ).
Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave.
No difference in response quality
Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. FIGREF23 . While there was variation in overall quality across different questions (Fig. FIGREF23 A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. FIGREF23 B). We conclude that the AUI neither increased nor decreased response quality.
Discussion
We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than if the AUI was not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers.
A challenge with text-focused crowdsourcing is aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this in the sense that the sets of responses became more diverse, not less. Yet, this may in fact be desirable—if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them.
One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop down menu (Fig. FIGREF2 B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area.
We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our Javascript instrumentation logged worker keystrokes as they typed, so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text input by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not have even noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. FIGREF24 A) and, of those responses, most were found in the first few (reconstructed) positions near the top of the AUI (Fig. FIGREF24 B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six. With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that the AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI which is most quickly accessible to the worker.
Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question if an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing which far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces.
Acknowledgments
We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634. | AMT |
ddd6ba43c4e1138156dd2ef03c25a4c4a47adad0 | ddd6ba43c4e1138156dd2ef03c25a4c4a47adad0_0 | Q: Are results reported only for English data?
Text: Introduction
Abstractive text summarization is an important text generation task. With the application of the sequence-to-sequence model and the publication of large-scale datasets, the quality of automatically generated summaries has been greatly improved BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . However, the semantic consistency of the automatically generated summaries is still far from satisfactory.
The commonly-used large-scale datasets for deep learning models are constructed based on naturally-annotated data with heuristic rules BIBREF1 , BIBREF3 , BIBREF4 . The summaries are not written for the source content specifically. It suggests that the provided summary may not be semantically consistent with the source content. For example, the dataset for Chinese social media text summarization, namely LCSTS, contains more than 20% text-summary pairs that are not related, according to the statistics of the manually checked data BIBREF1 .
Table TABREF1 shows an example of semantic inconsistency. Typically, the reference summary contains extra information that cannot be understood from the source content. It is hard to conclude the summary even for a human. Due to the inconsistency, the system cannot extract enough information from the source text, and it would be hard for the model to learn to generate the summary accordingly. The model has to encode spurious correspondence of the summary and the source content by memorization. However, this kind of correspondence is superficial and is not actually needed for generating reasonable summaries. Moreover, the information is harmful to generating semantically consistent summaries, because unrelated information is modeled. For example, the word UTF8gbsn“利益” (benefits) in the summary is not related to the source content. Thus, it has to be remembered by the model, together with the source content. However, this correspondence is spurious, because the word UTF8gbsn“利益” is not related to any word in the source content. In the following, we refer to this problem as Spurious Correspondence caused by the semantically inconsistent data. In this work, we aim to alleviate the impact of the semantic inconsistency of the current dataset. Based on the sequence-to-sequence model, we propose a regularization method to heuristically slow down the learning of the spurious correspondence, so that the unrelated information in the dataset is less represented by the model. We incorporate a new soft training target to achieve this goal. For each output time step in training, in addition to the gold reference word, the current output also targets a softened output word distribution that regularizes the current output word distribution. In this way, a more robust correspondence of the source content and the output words can be learned, and potentially, the output summary will be more semantically consistent. To obtain the softened output word distribution, we propose two methods based on the sequence-to-sequence model.
A more detailed explanation is given in Section SECREF2 . Another problem for abstractive text summarization is that the system summary cannot be easily evaluated automatically. ROUGE BIBREF9 is widely used for summarization evaluation. However, as ROUGE is designed for extractive text summarization, it cannot deal with summary paraphrasing in abstractive text summarization. Besides, as ROUGE is based on the reference, it requires a high-quality reference summary for a reasonable evaluation, which is also lacking in the existing dataset for Chinese social media text summarization. We argue that for proper evaluation of text generation tasks, human evaluation cannot be avoided. We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both the problem of paraphrasing and the lack of a high-quality reference. The contributions of this work are summarized as follows:
Proposed Method
Based on the fact that the spurious correspondence is not stable and its realization in the model is prone to change, we propose to alleviate the issue heuristically by regularization. We use the cross-entropy with an annealed output distribution as the regularization term in the loss, so that small fluctuations in the distribution are suppressed and a more robust and stable correspondence is learned. By correspondence, we mean the relation between (a) the current output, and (b) the source content and the partially generated output. Furthermore, we propose to use an additional output layer to generate the annealed output distribution. Due to the same fact, the two output layers will differ more in the words that superficially co-occur, so that the output distribution can be better regularized.
Regularizing the Neural Network with Annealed Distribution
Typically, in the training of the sequence-to-sequence model, only the one-hot hard target is used in the cross-entropy based loss function. For an example in the training set, the loss of an output vector is DISPLAYFORM0
where INLINEFORM0 is the output vector, INLINEFORM1 is the one-hot hard target vector, and INLINEFORM2 is the number of labels. However, as INLINEFORM3 is the one-hot vector, all the elements are zero except the one representing the correct label. Hence, the loss becomes DISPLAYFORM0
where INLINEFORM0 is the index of the correct label. The loss is then summed over the output sentences and across the minibatch and used as the source error signal in the backpropagation. The hard target could cause several problems in the training. Soft training methods try to use a soft target distribution to provide a generalized error signal to the training. For the summarization task, a straight-forward way would be to use the current output vector as the soft target, which contains the knowledge learned by the current model, i.e., the correspondence of the source content and the current output word: DISPLAYFORM0
Then, the two losses are combined as the new loss function: DISPLAYFORM0
where INLINEFORM0 is the index of the true label and INLINEFORM1 is the strength of the soft training loss. We refer to this approach as Self-Train (The left part of Figure FIGREF6 ). The output of the model can be seen as a refined supervisory signal for the learning of the model. The added loss promotes the learning of more stable correspondence. The output not only learns from the one-hot distribution but also the distribution generated by the model itself. However, during the training, the output of the neural network can become too close to the one-hot distribution. To solve this, we make the soft target the soften output distribution. We apply the softmax with temperature INLINEFORM2 , which is computed by DISPLAYFORM0
This transformation keeps the relative order of the labels, and a higher temperature will make the output distributed more evenly. The key motivation is that if the model is still not confident how to generate the current output word under the supervision of the reference summary, it means the correspondence can be spurious and the reference output is unlikely to be concluded from the source content. It makes no sense to force the model to learn such correspondence. The regularization follows that motivation, and in such case, the error signal will be less significant compared to the one-hot target. In the case where the model is extremely confident how to generate the current output, the annealed distribution will resemble the one-hot target. Thus, the regularization is not effective. In all, we make use of the model itself to identify the spurious correspondence and then regularize the output distribution accordingly.
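A minimal PyTorch-style sketch of the Self-Train objective as described above; this is an illustrative reconstruction rather than the authors' implementation, and the temperature and strength values, as well as the choice to detach the soft target, are assumptions:

```python
# Sketch (illustrative reconstruction, not the authors' code) of the
# Self-Train loss: hard cross-entropy plus a cross-entropy regularizer
# against the model's own temperature-annealed output distribution.
# T and lam are placeholder values; detaching the soft target is an
# assumed design choice so that it acts as a fixed target.
import torch
import torch.nn.functional as F

def self_train_loss(logits, gold, T=2.0, lam=0.5):
    """logits: (steps, vocab) decoder scores; gold: (steps,) reference word ids."""
    hard_loss = F.cross_entropy(logits, gold)
    soft_target = F.softmax(logits.detach() / T, dim=-1)   # annealed self-distribution
    log_probs = F.log_softmax(logits, dim=-1)
    soft_loss = -(soft_target * log_probs).sum(dim=-1).mean()
    return hard_loss + lam * soft_loss

logits = torch.randn(4, 10, requires_grad=True)   # toy: 4 decoding steps, vocab of 10
gold = torch.tensor([1, 3, 3, 7])
self_train_loss(logits, gold).backward()
```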
Dual Output Layers
However, the aforementioned method tries to regularize the output word distribution based on what it has already learned. The relative order of the output words is kept. This self-dependency may not be desirable for regularization. It may be better if more of the spurious correspondence can be identified. In this paper, we further propose to obtain the soft target from a different view of the model, so that different knowledge of the dataset can be used to mitigate the overfitting problem. An additional output layer is introduced to generate the soft target. The two output layers share the same hidden representation but have independent parameters. They could learn different knowledge of the data. We refer to this approach as Dual-Train. For clarity, the original output layer is denoted by INLINEFORM0 and the new output layer INLINEFORM1 . Their outputs are denoted by INLINEFORM2 and INLINEFORM3 , respectively. The output layer INLINEFORM4 acts as the original output layer. We apply soft training using the output from INLINEFORM5 to this output layer to increase its generalization ability. Suppose the correct label is INLINEFORM6 . The target of the output INLINEFORM7 includes both the one-hot distribution and the distribution generated from INLINEFORM8 : DISPLAYFORM0
The new output layer INLINEFORM0 is trained normally using the originally hard target. This output layer is not used in the prediction, and its only purpose is to generate the soft target to facilitate the soft training of INLINEFORM1 . Suppose the correct label is INLINEFORM2 . The target of the output INLINEFORM3 includes only the one-hot distribution: DISPLAYFORM0
Because of the random initialization of the parameters in the output layers, INLINEFORM0 and INLINEFORM1 could learn different things. The diversified knowledge is helpful when dealing with the spurious correspondence in the data. It can also be seen as an online kind of ensemble methods. Several different instances of the same model are softly aggregated into one to make classification. The right part of Figure FIGREF6 shows the architecture of the proposed Dual-Train method.
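The Dual-Train variant can be sketched in the same style (again an illustrative reconstruction, with placeholder hyper-parameter values):

```python
# Sketch of Dual-Train (illustrative reconstruction): two output layers share
# the decoder hidden state; layer B is trained on the hard target only and
# supplies the annealed soft target for layer A, which is the layer used at
# prediction time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualOutput(nn.Module):
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.out_a = nn.Linear(hidden_size, vocab_size)   # used for prediction
        self.out_b = nn.Linear(hidden_size, vocab_size)   # soft-target provider

    def loss(self, hidden, gold, T=2.0, lam=0.5):
        logits_a, logits_b = self.out_a(hidden), self.out_b(hidden)
        loss_b = F.cross_entropy(logits_b, gold)                   # hard target only
        soft_target = F.softmax(logits_b.detach() / T, dim=-1)     # annealed view of B
        log_probs_a = F.log_softmax(logits_a, dim=-1)
        soft_loss_a = -(soft_target * log_probs_a).sum(dim=-1).mean()
        loss_a = F.cross_entropy(logits_a, gold) + lam * soft_loss_a
        return loss_a + loss_b

dual = DualOutput(hidden_size=500, vocab_size=10)
hidden = torch.randn(4, 500)                      # toy decoder states
print(dual.loss(hidden, torch.tensor([1, 3, 3, 7])))
```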
Experiments
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. We also analyze the output text and the output label distribution of the models, showing the power of the proposed approach. Finally, we show the cases where the correspondences learned by the proposed approach are still problematic, which can be explained based on the approach we adopt.
Dataset
Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. The whole dataset is split into three parts, with 2,400,591 pairs in PART I for training, 10,666 pairs in PART II for validation, and 1,106 pairs in PART III for testing. The authors of the dataset have manually annotated the relevance scores, ranging from 1 to 5, of the text-summary pairs in PART II and PART III. They suggested that only pairs with scores no less than three should be used for evaluation, which leaves 8,685 pairs in PART II, and 725 pairs in PART III. From the statistics of the PART II and PART III, we can see that more than 20% of the pairs are dropped to maintain semantic quality. It indicates that the training set, which has not been manually annotated and checked, contains a huge quantity of unrelated text-summary pairs.
Experimental Settings
We use the sequence-to-sequence model BIBREF10 with attention BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 as the Baseline. Both the encoder and decoder are based on the single-layer LSTM BIBREF15 . The word embedding size is 400, and the hidden state size of the LSTM unit is 500. We conduct experiments on the word level. To convert the character sequences into word sequences, we use Jieba to segment the words, the same as in existing work BIBREF1 , BIBREF6 . Self-Train and Dual-Train are implemented based on the baseline model, with two more hyper-parameters, the temperature INLINEFORM0 and the soft training strength INLINEFORM1 . We use a very simple setting for all tasks, and set INLINEFORM2 , INLINEFORM3 . We pre-train the model without applying the soft training objective for 5 epochs out of the total 10 epochs. We use the Adam optimizer BIBREF16 for all the tasks, using the default settings with INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . In testing, we use beam search to generate the summaries, and the beam size is set to 5. We report the test results at the epoch that achieves the best score on the development set.
Evaluation Protocol
For text summarization, a common automatic evaluation method is ROUGE BIBREF9 . The generated summary is evaluated against the reference summary, based on unigram recall (ROUGE-1), bigram recall (ROUGE-2), and recall of longest common subsequence (ROUGE-L). To facilitate comparison with the existing systems, we adopt ROUGE as the automatic evaluation method. The ROUGE is calculated on the character level, following the previous work BIBREF1 . However, for abstractive text summarization, ROUGE is sub-optimal, and cannot assess the semantic consistency between the summary and the source content, especially when there is only one reference for a piece of text. The reason is that the same content may be expressed in different ways with different focuses. Simple word matching cannot recognize the paraphrasing. It is the case for all of the existing large-scale datasets. Besides, as aforementioned, ROUGE is calculated on the character level in Chinese text summarization, making the metrics favor the models on the character level in practice. In Chinese, a word is the smallest semantic element that can be uttered in isolation, not a character. In the extreme case, the generated text could be completely unintelligible, but the characters could still match. In theory, calculating ROUGE metrics on the word level could alleviate the problem. However, word segmentation is also a non-trivial task for Chinese. There are many kinds of segmentation rules, which will produce different ROUGE scores. We argue that it is not acceptable to introduce additional systematic bias in automatic evaluations, and automatic evaluation for semantically related tasks can only serve as a reference. To avoid these deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can compute the accuracy of the good summaries. The proposed evaluation is very simple and straightforward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge it properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . In the human evaluation, the text-summary pairs are dispatched to two human annotators who are native speakers of Chinese. As in our setting the summary is evaluated against the source text rather than the reference, the number of pairs that need to be manually evaluated is four times the number of pairs in the test set, because we need to compare four systems in total. To decrease the workload and get a hint about the annotation quality at the same time, we adopt the following procedure. We first randomly select 100 pairs in the validation set for the two human annotators to evaluate. Each pair is annotated twice, and the inter-annotator agreement is checked. We find that under the protocol, the inter-annotator agreement is quite high. In the evaluation of the test set, a pair is only annotated once to accelerate evaluation. To further maintain consistency, summaries of the same source content will not be distributed to different annotators.
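For reference, a simplified character-level ROUGE-1 recall can be computed as below; this is an illustration of the granularity discussed above, not the official ROUGE toolkit used for the reported scores:

```python
# Simplified illustration of character-level ROUGE-1 recall (not the official
# ROUGE toolkit).
from collections import Counter

def rouge1_recall_char(candidate: str, reference: str) -> float:
    cand = Counter(candidate.replace(" ", ""))
    ref = Counter(reference.replace(" ", ""))
    overlap = sum(min(cnt, cand[ch]) for ch, cnt in ref.items())
    return overlap / max(sum(ref.values()), 1)

print(rouge1_recall_char("国家发布新规定", "国家昨日发布规定"))   # -> 0.75
```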
Experimental Results
First, we show the results for human evaluation, which focuses on the semantic consistency of the summary with its source content. We evaluate the systems implemented by us as well as the reference. We cannot conduct human evaluations for the existing systems from other work, because the output summaries needed are not available to us. Besides, the baseline system we implemented is very competitive in terms of ROUGE and achieves better performance than almost all the existing systems. The results are listed in Table TABREF24 . It is surprising to see that the accuracy of the reference summaries does not reach 100%. It means that the test set still contains text-summary pairs of poor quality even after removing the pairs with relevance scores lower than 3 as suggested by the authors of the dataset. As we can see, Dual-Train improves the accuracy by 4%. Due to the rigorous definition of being good, the results mean that 4% more of the summaries are semantically consistent with their source content. However, Self-Train has a performance drop compared to the baseline. After investigating its generated summaries, we find that the major reason is that the generated summaries are not grammatically complete and often stop too early, although the generated part is indeed more related to the source content. Because of the definition of being good, the improved relevance does not make up for the loss in intelligibility.
Then, we compare the automatic evaluation results in Table TABREF25 . As we can see, only applying soft training without adaptation (Self-Train) hurts the performance. With the additional output layer (Dual-Train), the performance can be greatly improved over the baseline. Moreover, with the proposed method the simple baseline model is second to the best compared with the state-of-the-art models and even surpasses in ROUGE-2. It is promising that applying the proposed method to the state-of-the-art model could also improve its performance. The automatic evaluation is done on the original test set to facilitate comparison with existing work. However, a more reasonable setting would be to exclude the 52 test instances that are found bad in the human evaluation, because the quality of the automatic evaluation depends on the reference summary. As the existing methods do not provide their test output, it is a non-trivial task to reproduce all their results of the same reported performance. Nonetheless, it does not change the fact that ROUGE cannot handle the issues in abstractive text summarization properly.
Experimental Analysis
To examine the effect of the proposed method and reveal how the proposed method improves the consistency, we compare the output of the baseline with Dual-Train, based on both the output text and the output label distribution. We also conduct error analysis to discover room for improvements.
To gain a better understanding of the results, we analyze the summaries generated by the baseline model and our proposed model. Some of the summaries are listed in Table TABREF28 . As shown in the table, the summaries generated by the proposed method are much better than the baseline, and we believe they are more precise and informative than the references. In the first one, the baseline system generates a grammatical but unrelated summary, while the proposed method generates a more informative summary. In the second one, the baseline system generates a related but ungrammatical summary, while the proposed method generates a summary related to the source content but different from the reference. We believe the generated summary is actually better than the reference because the focus of the visit is not the event itself but its purpose. In the third one, the baseline system generates a related and grammatical summary, but the facts stated are completely incorrect. The summary generated by the proposed method is more comprehensive than the reference, while the reference only includes the facts in the last sentence of the source content. In short, the generated summary of the proposed method is more consistent with the source content. It also exhibits the necessity of the proposed human evaluation. Because when the generated summary is evaluated against the reference, it may seem redundant or wrong, but it is actually true to the source content. While it is arguable that the generated summary is better than the reference, there is no doubt that the generated summary of the proposed method is better than the baseline. However, the improvement cannot be properly shown by the existing evaluation methods. Furthermore, the examples suggest that the proposed method does learn better correspondence. The highlighted words in each example in Table TABREF28 share almost the same previous words. However, in the first one, the baseline considers “UTF8gbsn停” (stop) as the most related words, which is a sign of noisy word relations learned from other training examples, while the proposed method generates “UTF8gbsn进站” (to the platform), which is more related to what a human thinks. It is the same with the second example, where a human selects “UTF8gbsn专家” (expert) and Dual-Train selects “UTF8gbsn工作者” (worker), while the baseline selects “UTF8gbsn钻研” (research) and fails to generate a grammatical sentence later. In the third one, the reference and the baseline use the same word, while Dual-Train chooses a word of the same meaning. It can be concluded that Dual-Train indeed learns better word relations that could generalize to the test set, and good word relations can guide the decoder to generate semantically consistent summaries.
To show why the generated text of the proposed method is more related to the source content, we further analyze the label distribution, i.e., the word distribution, generated by the (first) output layer, from which the output word is selected. To illustrate the relationship, we calculate a representation for each word based on the label distributions. Each representation is associated with a specific label (word), denoted by INLINEFORM0 , and each dimension INLINEFORM1 shows how likely the label indexed by INLINEFORM2 will be generated instead of the label INLINEFORM3 . To get such representation, we run the model on the training set and get the output vectors in the decoder, which are then averaged with respect to their corresponding labels to form a representation. We can obtain the most related words of a word by simply selecting the highest values from its representation. Table TABREF30 lists some of the labels and the top 4 labels that are most likely to replace each of the labels. It is a hint about the correspondence learned by the model. From the results, it can be observed that Dual-Train learns the better semantic relevance of a word compared to the baseline because the spurious word correspondence is alleviated by regularization. For example, the possible substitutes of the word “UTF8gbsn多长时间” (how long) considered by Dual-Train include “UTF8gbsn多少” (how many), “UTF8gbsn多久” (how long) and “UTF8gbsn时间” (time). However, the relatedness is learned poorly in the baseline, as there is “UTF8gbsn知道” (know), a number, and two particles in the possible substitutes considered by the baseline. Another representative example is the word “UTF8gbsn图像” (image), where the baseline also includes two particles in its most related words. The phenomenon shows that the baseline suffers from spurious correspondence in the data, and learns noisy and harmful relations, which rely too much on the co-occurrence. In contrast, the proposed method can capture more stable semantic relatedness of the words. For text summarization, grouping the words that are in the same topic together can help the model to generate sentences that are more coherent and can improve the quality of the summarization and the relevance to the source content. Although the proposed method resolves a large number of the noisy word relations, there are still cases that the less related words are not eliminated. For example, the top 4 most similar words of “UTF8gbsn期货业” (futures industry) from the proposed method include “UTF8gbsn改革” (reform). It is more related than “2013” from the baseline, but it can still be harmful to text summarization. The problem could arise from the fact that words as “UTF8gbsn期货业” rarely occur in the training data, and their relatedness is not reflected in the data. Another issue is that there are some particles, e.g., “UTF8gbsn的” (DE) in the most related words. A possible explanation is that particles show up too often in the contexts of the word, and it is hard for the models to distinguish them from the real semantically-related words. As our proposed approach is based on regularization of the less common correspondence, it is reasonable that such kind of relation cannot be eliminated. The first case can be categorized into data sparsity, which usually needs the aid of knowledge bases to solve. The second case is due to the characteristics of natural language. However, as such words are often closed class words, the case can be resolved by manually restricting the relatedness of these words.
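The label-representation analysis described above can be sketched as follows (a NumPy illustration with toy shapes; the grouping-by-gold-label interpretation is our reading of the averaging step):

```python
# Sketch of the label-representation analysis: average the decoder's output
# distributions grouped by the gold word at each step, then read off a word's
# most probable substitutes.
import numpy as np

def label_representations(output_dists, gold_ids, vocab_size):
    """output_dists: (steps, vocab) softmax outputs; gold_ids: (steps,) gold word ids."""
    sums = np.zeros((vocab_size, vocab_size))
    counts = np.zeros(vocab_size)
    for dist, y in zip(output_dists, gold_ids):
        sums[y] += dist
        counts[y] += 1
    counts[counts == 0] = 1   # leave unseen labels as zero vectors
    return sums / counts[:, None]

def top_substitutes(rep, word_id, id2word, k=4):
    ranked = np.argsort(-rep[word_id])
    return [id2word[i] for i in ranked if i != word_id][:k]

vocab = ["<pad>", "时间", "多久", "多少", "知道"]
dists = np.random.default_rng(0).dirichlet(np.ones(len(vocab)), size=6)   # toy outputs
reps = label_representations(dists, gold_ids=[1, 1, 2, 3, 3, 4], vocab_size=len(vocab))
print(top_substitutes(reps, word_id=1, id2word=vocab))
```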
Related Work
Related work includes efforts on designing models for the Chinese social media text summarization task and the efforts on obtaining soft training target for supervised learning.
Systems for Chinese Social Media Text Summarization
The Large-Scale Chinese Short Text Summarization dataset was proposed by BIBREF1 . Along with the datasets, BIBREF1 also proposed two systems to solve the task, namely RNN and RNN-context. They were two sequence-to-sequence based models with GRU as the encoder and the decoder. The difference between them was that RNN-context had attention mechanism while RNN did not. They conducted experiments both on the character level and on the word level. RNN-distract BIBREF5 was a distraction-based neural model, where the attention mechanism focused on different parts of the source content. CopyNet BIBREF6 incorporated a copy mechanism to allow part of the generated summary to be copied from the source content. The copy mechanism also explained that the results of their word-level model were better than the results of their character-level model. SRB BIBREF17 was a sequence-to-sequence based neural model to improve the semantic relevance between the input text and the output summary. DRGD BIBREF8 was a deep recurrent generative decoder model, combining the decoder with a variational autoencoder.
Methods for Obtaining Soft Training Target
Soft target aims to refine the supervisory signal in supervised learning. Related work includes soft target for traditional learning algorithms and model distillation for deep learning algorithms. The soft label methods are typically for binary classification BIBREF18 , where the human annotators not only assign a label for an example but also give information on how confident they are regarding the annotation. The main difference from our method is that the soft label methods require additional annotation information (e.g., the confidence information of the annotated labels) of the training data, which is costly in the text summarization task. There have also been prior studies on model distillation in deep learning that distills big models into a smaller one. Model distillation BIBREF19 combined different instances of the same model into a single one. It used the output distributions of the previously trained models as the soft target distribution to train a new model. A similar work to model distillation is the soft-target regularization method BIBREF20 for image classification. Instead of using the outputs of other instances, it used an exponential average of the past label distributions of the current instance as the soft target distribution. The proposed method is different compared with the existing model distillation methods, in that the proposed method does not require additional models or additional space to record the past soft label distributions. The existing methods are not suitable for text summarization tasks, because the training of an additional model is costly, and the additional space is huge due to the massive number of data. The proposed method uses its current state as the soft target distribution and eliminates the need to train additional models or to store the history information.
Conclusions
We propose a regularization approach for the sequence-to-sequence model on the Chinese social media summarization task. In the proposed approach, we use a cross-entropy based regularization term to make the model neglect the possible unrelated words. We propose two methods for obtaining the soft output word distribution used in the regularization, of which Dual-Train proves to be more effective. Experimental results show that the proposed method can improve the semantic consistency by 4% in terms of human evaluation. As shown by the analysis, the proposed method achieves the improvements by eliminating the less semantically-related word correspondence. The proposed human evaluation method is effective and efficient in judging the semantic consistency, which is absent in previous work but is crucial in the accurate evaluation of the text summarization systems. The proposed metric is simple to conduct and easy to interpret. It also provides an insight on how practicable the existing systems are in the real-world scenario.
Standard for Human Evaluation
For human evaluation, the annotators are asked to evaluate the summary against the source content based on the goodness of the summary. If the summary is not understandable, relevant or correct according to the source content, the summary is considered bad. More concretely, the annotators are asked to examine the following aspects to determine whether the summary is good:
If a rule is not met, the summary is labeled bad, and the following rules do not need to be checked. In Table TABREF33 , we give examples for cases of each rule. In the first one, the summary is not fluent, because the patient of the predicate UTF8gbsn“找” (seek for) is missing. The second summary is fluent, but the content is not related to the source, in that we cannot determine if Lei Jun is actually fighting the scalpers based on the source content. In the third one, the summary is fluent and related to the source content, but the facts are wrong, as the summary is made up from facts about different people. The last one meets all three rules, and thus it is considered good. This work is supported in part by the National Natural Science Foundation of China (http://dx.doi.org/10.13039/501100001809) under Grant No. 61673028. | No
bd99aba3309da96e96eab3e0f4c4c8c70b51980a | bd99aba3309da96e96eab3e0f4c4c8c70b51980a_0 | Q: Which existing models does this approach outperform?
Text: Introduction
Abstractive text summarization is an important text generation task. With the application of the sequence-to-sequence model and the publication of large-scale datasets, the quality of automatically generated summaries has been greatly improved BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . However, the semantic consistency of the automatically generated summaries is still far from satisfactory.
The commonly-used large-scale datasets for deep learning models are constructed based on naturally-annotated data with heuristic rules BIBREF1 , BIBREF3 , BIBREF4 . The summaries are not written for the source content specifically. It suggests that the provided summary may not be semantically consistent with the source content. For example, the dataset for Chinese social media text summarization, namely LCSTS, contains more than 20% text-summary pairs that are not related, according to the statistics of the manually checked data BIBREF1 .
Table TABREF1 shows an example of semantic inconsistency. Typically, the reference summary contains extra information that cannot be understood from the source content. It is hard to conclude such a summary from the source content, even for a human. Due to the inconsistency, the system cannot extract enough information from the source text, and it would be hard for the model to learn to generate the summary accordingly. The model has to encode spurious correspondence of the summary and the source content by memorization. However, this kind of correspondence is superficial and is not actually needed for generating reasonable summaries. Moreover, the information is harmful to generating semantically consistent summaries, because unrelated information is modeled. For example, the word UTF8gbsn“利益” (benefits) in the summary is not related to the source content. Thus, it has to be remembered by the model, together with the source content. However, this correspondence is spurious, because the word UTF8gbsn“利益” is not related to any word in the source content. In the following, we refer to this problem as Spurious Correspondence caused by the semantically inconsistent data. In this work, we aim to alleviate the impact of the semantic inconsistency of the current dataset. Based on the sequence-to-sequence model, we propose a regularization method to heuristically slow down the learning of the spurious correspondence, so that the unrelated information in the dataset is less represented by the model. We incorporate a new soft training target to achieve this goal. At each output time step in training, in addition to the gold reference word, the current output also targets a softened output word distribution that regularizes the current output word distribution. In this way, a more robust correspondence between the source content and the output words can be learned, and potentially, the output summary will be more semantically consistent. To obtain the softened output word distribution, we propose two methods based on the sequence-to-sequence model.
More detailed explanation is introduced in Section SECREF2 . Another problem for abstractive text summarization is that the system summary cannot be easily evaluated automatically. ROUGE BIBREF9 is widely used for summarization evaluation. However, as ROUGE is designed for extractive text summarization, it cannot deal with summary paraphrasing in abstractive text summarization. Besides, as ROUGE is based on the reference, it requires high-quality reference summary for a reasonable evaluation, which is also lacking in the existing dataset for Chinese social media text summarization. We argue that for proper evaluation of text generation task, human evaluation cannot be avoided. We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both of the problems of paraphrasing and lack of high-quality reference. The contributions of this work are summarized as follows:
Proposed Method
Based on the fact that the spurious correspondence is not stable and its realization in the model is prone to change, we propose to alleviate the issue heuristically by regularization. We use the cross-entropy with an annealed output distribution as the regularization term in the loss, so that small fluctuations in the distribution are suppressed and a more robust and stable correspondence is learned. By correspondence, we mean the relation between (a) the current output, and (b) the source content and the partially generated output. Furthermore, we propose to use an additional output layer to generate the annealed output distribution. Due to the same fact, the two output layers will differ more in the words that superficially co-occur, so that the output distribution can be better regularized.
Regularizing the Neural Network with Annealed Distribution
Typically, in the training of the sequence-to-sequence model, only the one-hot hard target is used in the cross-entropy based loss function. For an example in the training set, the loss of an output vector is DISPLAYFORM0
where INLINEFORM0 is the output vector, INLINEFORM1 is the one-hot hard target vector, and INLINEFORM2 is the number of labels. However, as INLINEFORM3 is the one-hot vector, all the elements are zero except the one representing the correct label. Hence, the loss becomes DISPLAYFORM0
where INLINEFORM0 is the index of the correct label. The loss is then summed over the output sentences and across the minibatch and used as the source error signal in the backpropagation. The hard target could cause several problems in the training. Soft training methods try to use a soft target distribution to provide a generalized error signal to the training. For the summarization task, a straight-forward way would be to use the current output vector as the soft target, which contains the knowledge learned by the current model, i.e., the correspondence of the source content and the current output word: DISPLAYFORM0
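The display equations above survive only as DISPLAYFORM0 placeholders; from the surrounding definitions they are presumably the standard cross-entropy forms, reconstructed here with assumed notation (y for the output vector, ŷ for the one-hot target, K for the number of labels, c for the index of the correct label, and ỹ for the model's own output used as the soft target):

\mathcal{L}_{\mathrm{hard}}(\mathbf{y}, \hat{\mathbf{y}}) = -\sum_{k=1}^{K} \hat{y}_k \log y_k

\mathcal{L}_{\mathrm{hard}} = -\log y_c

\mathcal{L}_{\mathrm{soft}} = -\sum_{k=1}^{K} \tilde{y}_k \log y_k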
Then, the two losses are combined as the new loss function: DISPLAYFORM0
where INLINEFORM0 is the index of the true label and INLINEFORM1 is the strength of the soft training loss. We refer to this approach as Self-Train (the left part of Figure FIGREF6 ). The output of the model can be seen as a refined supervisory signal for the learning of the model. The added loss promotes the learning of more stable correspondence. The output learns not only from the one-hot distribution but also from the distribution generated by the model itself. However, during training, the output of the neural network can become too close to the one-hot distribution. To solve this, we use a softened output distribution as the soft target. We apply the softmax with temperature INLINEFORM2 , which is computed by DISPLAYFORM0
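Under the same assumed notation, the combined loss and the temperature-annealed softmax described here are presumably:

\mathcal{L} = -\log y_c - \lambda \sum_{k=1}^{K} q_k \log y_k

q_k = \frac{\exp(z_k / \tau)}{\sum_{j=1}^{K} \exp(z_j / \tau)}

where z denotes the pre-softmax logits, τ the temperature, and λ the strength of the soft training loss.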
This transformation keeps the relative order of the labels, and a higher temperature will make the output distributed more evenly. The key motivation is that if the model is still not confident how to generate the current output word under the supervision of the reference summary, it means the correspondence can be spurious and the reference output is unlikely to be concluded from the source content. It makes no sense to force the model to learn such correspondence. The regularization follows that motivation, and in such case, the error signal will be less significant compared to the one-hot target. In the case where the model is extremely confident how to generate the current output, the annealed distribution will resemble the one-hot target. Thus, the regularization is not effective. In all, we make use of the model itself to identify the spurious correspondence and then regularize the output distribution accordingly.
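A minimal PyTorch sketch of the Self-Train objective as described above; the default temperature and strength values are illustrative placeholders, since the exact values used by the authors are not recoverable from this text.

```python
import torch
import torch.nn.functional as F

def self_train_loss(logits, targets, temperature=2.0, strength=1.0):
    """Cross-entropy with a temperature-annealed copy of the model's own
    output distribution as an extra (soft) target.

    logits:  (batch, steps, vocab) decoder scores before softmax
    targets: (batch, steps) gold word ids
    """
    log_probs = F.log_softmax(logits, dim=-1)

    # Hard part: ordinary negative log-likelihood of the reference words.
    hard = F.nll_loss(log_probs.transpose(1, 2), targets, reduction="mean")

    # Soft part: annealed output distribution, detached so that it acts as
    # a fixed target rather than back-propagating through itself.
    soft_target = F.softmax(logits.detach() / temperature, dim=-1)
    soft = -(soft_target * log_probs).sum(dim=-1).mean()

    return hard + strength * soft
```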
Dual Output Layers
However, the aforementioned method tries to regularize the output word distribution based on what it has already learned. The relative order of the output words is kept. The self-dependency may not be desirable for regularization. It may be better if more correspondence that is spurious can be identified. In this paper, we further propose to obtain the soft target from a different view of the model, so that different knowledge of the dataset can be used to mitigate the overfitting problem. An additional output layer is introduced to generate the soft target. The two output layers share the same hidden representation but have independent parameters. They could learn different knowledge of the data. We refer to this approach as Dual-Train. For clarity, the original output layer is denoted by INLINEFORM0 and the new output layer INLINEFORM1 . Their outputs are denoted by INLINEFORM2 and INLINEFORM3 , respectively. The output layer INLINEFORM4 acts as the original output layer. We apply soft training using the output from INLINEFORM5 to this output layer to increase its ability of generalization. Suppose the correct label is INLINEFORM6 . The target of the output INLINEFORM7 includes both the one-hot distribution and the distribution generated from INLINEFORM8 : DISPLAYFORM0
The new output layer INLINEFORM0 is trained normally using the originally hard target. This output layer is not used in the prediction, and its only purpose is to generate the soft target to facilitate the soft training of INLINEFORM1 . Suppose the correct label is INLINEFORM2 . The target of the output INLINEFORM3 includes only the one-hot distribution: DISPLAYFORM0
Because of the random initialization of the parameters in the output layers, INLINEFORM0 and INLINEFORM1 could learn different things. The diversified knowledge is helpful when dealing with the spurious correspondence in the data. It can also be seen as an online kind of ensemble methods. Several different instances of the same model are softly aggregated into one to make classification. The right part of Figure FIGREF6 shows the architecture of the proposed Dual-Train method.
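A sketch of the Dual-Train architecture under the same assumptions: two linear softmax layers share the decoder hidden state, the auxiliary layer is trained with the hard target only, and its annealed, detached output serves as the extra soft target for the main layer. Class and function names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualOutputDecoderHead(nn.Module):
    """Two independent softmax layers over a shared decoder hidden state.
    p1 is the layer actually used for prediction; p2 only supplies the
    soft target that regularizes p1."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.p1 = nn.Linear(hidden_size, vocab_size)
        self.p2 = nn.Linear(hidden_size, vocab_size)

    def forward(self, hidden):                     # hidden: (batch, steps, hidden)
        return self.p1(hidden), self.p2(hidden)    # two sets of logits

def dual_train_loss(logits1, logits2, targets, temperature=2.0, strength=1.0):
    log_p1 = F.log_softmax(logits1, dim=-1)
    log_p2 = F.log_softmax(logits2, dim=-1)

    # p2 is trained with the hard target only.
    loss_p2 = F.nll_loss(log_p2.transpose(1, 2), targets)

    # p1 gets the hard target plus the annealed, detached output of p2.
    loss_p1_hard = F.nll_loss(log_p1.transpose(1, 2), targets)
    soft_target = F.softmax(logits2.detach() / temperature, dim=-1)
    loss_p1_soft = -(soft_target * log_p1).sum(dim=-1).mean()

    return loss_p1_hard + strength * loss_p1_soft + loss_p2
```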
Experiments
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. We also analyze the output text and the output label distribution of the models, showing the power of the proposed approach. Finally, we show the cases where the correspondences learned by the proposed approach are still problematic, which can be explained based on the approach we adopt.
Dataset
Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. The whole dataset is split into three parts, with 2,400,591 pairs in PART I for training, 10,666 pairs in PART II for validation, and 1,106 pairs in PART III for testing. The authors of the dataset have manually annotated the relevance scores, ranging from 1 to 5, of the text-summary pairs in PART II and PART III. They suggested that only pairs with scores no less than three should be used for evaluation, which leaves 8,685 pairs in PART II, and 725 pairs in PART III. From the statistics of the PART II and PART III, we can see that more than 20% of the pairs are dropped to maintain semantic quality. It indicates that the training set, which has not been manually annotated and checked, contains a huge quantity of unrelated text-summary pairs.
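A trivial sketch of the filtering rule stated above (keeping only manually annotated pairs with a relevance score of at least 3); the dictionary keys are assumptions about how the pairs might be stored.

```python
def filter_by_relevance(pairs, min_score=3):
    """Keep only annotated text-summary pairs whose relevance score is
    at least `min_score`, as suggested for PART II and PART III.

    pairs: iterable of dicts with keys 'text', 'summary', 'score'.
    """
    return [p for p in pairs if p["score"] >= min_score]
```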
Experimental Settings
We use the sequence-to-sequence model BIBREF10 with attention BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 as the Baseline. Both the encoder and decoder are based on the single layer LSTM BIBREF15 . The word embedding size is 400, and the hidden state size of the LSTM unit is 500. We conduct experiments on the word level. To convert the character sequences into word sequences, we use Jieba to segment the words, the same with the existing work BIBREF1 , BIBREF6 . Self-Train and Dual-Train are implemented based on the baseline model, with two more hyper-parameters, the temperature INLINEFORM0 and the soft training strength INLINEFORM1 . We use a very simple setting for all tasks, and set INLINEFORM2 , INLINEFORM3 . We pre-train the model without applying the soft training objective for 5 epochs out of total 10 epochs. We use the Adam optimizer BIBREF16 for all the tasks, using the default settings with INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . In testing, we use beam search to generate the summaries, and the beam size is set to 5. We report the test results at the epoch that achieves the best score on the development set.
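The reported settings, collected into an illustrative configuration dictionary; entries marked None correspond to values that appear only as placeholders in this text and are therefore not filled in.

```python
# Illustrative summary of the experimental settings reported above.
CONFIG = {
    "embedding_size": 400,
    "hidden_size": 500,
    "encoder": "single-layer LSTM",
    "decoder": "single-layer LSTM with attention",
    "segmentation": "Jieba (word level)",
    "optimizer": "Adam (default settings)",
    "temperature": None,         # value appears only as a placeholder here
    "soft_loss_strength": None,  # value appears only as a placeholder here
    "pretrain_epochs": 5,        # epochs before the soft objective is enabled
    "total_epochs": 10,
    "beam_size": 5,
}
```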
Evaluation Protocol
For text summarization, a common automatic evaluation method is ROUGE BIBREF9 . The generated summary is evaluated against the reference summary, based on unigram recall (ROUGE-1), bigram recall (ROUGE-2), and recall of longest common subsequence (ROUGE-L). To facilitate comparison with the existing systems, we adopt ROUGE as the automatic evaluation method. The ROUGE is calculated on the character level, following the previous work BIBREF1 . However, for abstractive text summarization, the ROUGE is sub-optimal, and cannot assess the semantic consistency between the summary and the source content, especially when there is only one reference for a piece of text. The reason is that the same content may be expressed in different ways with different focuses. Simple word match cannot recognize the paraphrasing. It is the case for all of the existing large-scale datasets. Besides, as aforementioned, ROUGE is calculated on the character level in Chinese text summarization, making the metrics favor the models on the character level in practice. In Chinese, a word is the smallest semantic element that can be uttered in isolation, not a character. In the extreme case, the generated text could be completely intelligible, but the characters could still match. In theory, calculating ROUGE metrics on the word level could alleviate the problem. However, word segmentation is also a non-trivial task for Chinese. There are many kinds of segmentation rules, which will produce different ROUGE scores. We argue that it is not acceptable to introduce additional systematic bias in automatic evaluations, and automatic evaluation for semantically related tasks can only serve as a reference. To avoid the deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can get an accuracy of the good summaries. The proposed evaluation is very simple and straight-forward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . In the human evaluation, the text-summary pairs are dispatched to two human annotators who are native speakers of Chinese. As in our setting the summary is evaluated against the reference, the number of the pairs needs to be manually evaluated is four times the number of the pairs in the test set, because we need to compare four systems in total. To decrease the workload and get a hint about the annotation quality at the same time, we adopt the following procedure. We first randomly select 100 pairs in the validation set for the two human annotators to evaluate. Each pair is annotated twice, and the inter-annotator agreement is checked. We find that under the protocol, the inter-annotator agreement is quite high. In the evaluation of the test set, a pair is only annotated once to accelerate evaluation. To further maintain consistency, summaries of the same source content will not be distributed to different annotators.
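A small sketch of how the proposed human-evaluation metric could be tallied: each generated summary is judged good or bad against its source text, and the per-system accuracy is the fraction of good labels. The data layout is an assumption.

```python
from collections import defaultdict

def semantic_consistency_accuracy(judgments):
    """judgments: list of (system_name, 'good' or 'bad') pairs, one per
    summary judged against its source text (not the reference)."""
    good = defaultdict(int)
    total = defaultdict(int)
    for system, label in judgments:
        total[system] += 1
        good[system] += (label == "good")
    return {s: good[s] / total[s] for s in total}
```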
Experimental Results
First, we show the results for human evaluation, which focuses on the semantic consistency of the summary with its source content. We evaluate the systems implemented by us as well as the reference. We cannot conduct human evaluations for the existing systems from other work, because the output summaries needed are not available for us. Besides, the baseline system we implemented is very competitive in terms of ROUGE and achieves better performance than almost all the existing systems. The results are listed in Table TABREF24 . It is surprising to see that the accuracy of the reference summaries does not reach 100%. It means that the test set still contains text-summary pairs of poor quality even after removing the pairs with relevance scores lower than 3 as suggested by the authors of the dataset. As we can see, Dual-Train improves the accuracy by 4%. Due to the rigorous definition of being good, the results mean that 4% more of the summaries are semantically consistent with their source content. However, Self-Train has a performance drop compared to the baseline. After investigating its generated summaries, we find that the major reason is that the generated summaries are not grammatically complete and often stop too early, although the generated part is indeed more related to the source content. Because the definition of being good, the improved relevance does not make up the loss on intelligibility.
Then, we compare the automatic evaluation results in Table TABREF25 . As we can see, only applying soft training without adaptation (Self-Train) hurts the performance. With the additional output layer (Dual-Train), the performance can be greatly improved over the baseline. Moreover, with the proposed method the simple baseline model is second to the best compared with the state-of-the-art models and even surpasses in ROUGE-2. It is promising that applying the proposed method to the state-of-the-art model could also improve its performance. The automatic evaluation is done on the original test set to facilitate comparison with existing work. However, a more reasonable setting would be to exclude the 52 test instances that are found bad in the human evaluation, because the quality of the automatic evaluation depends on the reference summary. As the existing methods do not provide their test output, it is a non-trivial task to reproduce all their results of the same reported performance. Nonetheless, it does not change the fact that ROUGE cannot handle the issues in abstractive text summarization properly.
Experimental Analysis
To examine the effect of the proposed method and reveal how the proposed method improves the consistency, we compare the output of the baseline with Dual-Train, based on both the output text and the output label distribution. We also conduct error analysis to discover room for improvements.
To gain a better understanding of the results, we analyze the summaries generated by the baseline model and our proposed model. Some of the summaries are listed in Table TABREF28 . As shown in the table, the summaries generated by the proposed method are much better than the baseline, and we believe they are more precise and informative than the references. In the first one, the baseline system generates a grammatical but unrelated summary, while the proposed method generates a more informative summary. In the second one, the baseline system generates a related but ungrammatical summary, while the proposed method generates a summary related to the source content but different from the reference. We believe the generated summary is actually better than the reference because the focus of the visit is not the event itself but its purpose. In the third one, the baseline system generates a related and grammatical summary, but the facts stated are completely incorrect. The summary generated by the proposed method is more comprehensive than the reference, while the reference only includes the facts in the last sentence of the source content. In short, the generated summary of the proposed method is more consistent with the source content. It also exhibits the necessity of the proposed human evaluation. Because when the generated summary is evaluated against the reference, it may seem redundant or wrong, but it is actually true to the source content. While it is arguable that the generated summary is better than the reference, there is no doubt that the generated summary of the proposed method is better than the baseline. However, the improvement cannot be properly shown by the existing evaluation methods. Furthermore, the examples suggest that the proposed method does learn better correspondence. The highlighted words in each example in Table TABREF28 share almost the same previous words. However, in the first one, the baseline considers “UTF8gbsn停” (stop) as the most related words, which is a sign of noisy word relations learned from other training examples, while the proposed method generates “UTF8gbsn进站” (to the platform), which is more related to what a human thinks. It is the same with the second example, where a human selects “UTF8gbsn专家” (expert) and Dual-Train selects “UTF8gbsn工作者” (worker), while the baseline selects “UTF8gbsn钻研” (research) and fails to generate a grammatical sentence later. In the third one, the reference and the baseline use the same word, while Dual-Train chooses a word of the same meaning. It can be concluded that Dual-Train indeed learns better word relations that could generalize to the test set, and good word relations can guide the decoder to generate semantically consistent summaries.
To show why the generated text of the proposed method is more related to the source content, we further analyze the label distribution, i.e., the word distribution, generated by the (first) output layer, from which the output word is selected. To illustrate the relationship, we calculate a representation for each word based on the label distributions. Each representation is associated with a specific label (word), denoted by INLINEFORM0 , and each dimension INLINEFORM1 shows how likely the label indexed by INLINEFORM2 will be generated instead of the label INLINEFORM3 . To get such representation, we run the model on the training set and get the output vectors in the decoder, which are then averaged with respect to their corresponding labels to form a representation. We can obtain the most related words of a word by simply selecting the highest values from its representation. Table TABREF30 lists some of the labels and the top 4 labels that are most likely to replace each of the labels. It is a hint about the correspondence learned by the model. From the results, it can be observed that Dual-Train learns the better semantic relevance of a word compared to the baseline because the spurious word correspondence is alleviated by regularization. For example, the possible substitutes of the word “UTF8gbsn多长时间” (how long) considered by Dual-Train include “UTF8gbsn多少” (how many), “UTF8gbsn多久” (how long) and “UTF8gbsn时间” (time). However, the relatedness is learned poorly in the baseline, as there is “UTF8gbsn知道” (know), a number, and two particles in the possible substitutes considered by the baseline. Another representative example is the word “UTF8gbsn图像” (image), where the baseline also includes two particles in its most related words. The phenomenon shows that the baseline suffers from spurious correspondence in the data, and learns noisy and harmful relations, which rely too much on the co-occurrence. In contrast, the proposed method can capture more stable semantic relatedness of the words. For text summarization, grouping the words that are in the same topic together can help the model to generate sentences that are more coherent and can improve the quality of the summarization and the relevance to the source content. Although the proposed method resolves a large number of the noisy word relations, there are still cases that the less related words are not eliminated. For example, the top 4 most similar words of “UTF8gbsn期货业” (futures industry) from the proposed method include “UTF8gbsn改革” (reform). It is more related than “2013” from the baseline, but it can still be harmful to text summarization. The problem could arise from the fact that words as “UTF8gbsn期货业” rarely occur in the training data, and their relatedness is not reflected in the data. Another issue is that there are some particles, e.g., “UTF8gbsn的” (DE) in the most related words. A possible explanation is that particles show up too often in the contexts of the word, and it is hard for the models to distinguish them from the real semantically-related words. As our proposed approach is based on regularization of the less common correspondence, it is reasonable that such kind of relation cannot be eliminated. The first case can be categorized into data sparsity, which usually needs the aid of knowledge bases to solve. The second case is due to the characteristics of natural language. However, as such words are often closed class words, the case can be resolved by manually restricting the relatedness of these words.
Related Work
Related work includes efforts on designing models for the Chinese social media text summarization task and the efforts on obtaining soft training target for supervised learning.
Systems for Chinese Social Media Text Summarization
The Large-Scale Chinese Short Text Summarization dataset was proposed by BIBREF1 . Along with the datasets, BIBREF1 also proposed two systems to solve the task, namely RNN and RNN-context. They were two sequence-to-sequence based models with GRU as the encoder and the decoder. The difference between them was that RNN-context had attention mechanism while RNN did not. They conducted experiments both on the character level and on the word level. RNN-distract BIBREF5 was a distraction-based neural model, where the attention mechanism focused on different parts of the source content. CopyNet BIBREF6 incorporated a copy mechanism to allow part of the generated summary to be copied from the source content. The copy mechanism also explained that the results of their word-level model were better than the results of their character-level model. SRB BIBREF17 was a sequence-to-sequence based neural model to improve the semantic relevance between the input text and the output summary. DRGD BIBREF8 was a deep recurrent generative decoder model, combining the decoder with a variational autoencoder.
Methods for Obtaining Soft Training Target
Soft target aims to refine the supervisory signal in supervised learning. Related work includes soft target for traditional learning algorithms and model distillation for deep learning algorithms. The soft label methods are typically for binary classification BIBREF18 , where the human annotators not only assign a label for an example but also give information on how confident they are regarding the annotation. The main difference from our method is that the soft label methods require additional annotation information (e.g., the confidence information of the annotated labels) of the training data, which is costly in the text summarization task. There have also been prior studies on model distillation in deep learning that distills big models into a smaller one. Model distillation BIBREF19 combined different instances of the same model into a single one. It used the output distributions of the previously trained models as the soft target distribution to train a new model. A similar work to model distillation is the soft-target regularization method BIBREF20 for image classification. Instead of using the outputs of other instances, it used an exponential average of the past label distributions of the current instance as the soft target distribution. The proposed method is different compared with the existing model distillation methods, in that the proposed method does not require additional models or additional space to record the past soft label distributions. The existing methods are not suitable for text summarization tasks, because the training of an additional model is costly, and the additional space is huge due to the massive number of data. The proposed method uses its current state as the soft target distribution and eliminates the need to train additional models or to store the history information.
Conclusions
We propose a regularization approach for the sequence-to-sequence model on the Chinese social media summarization task. In the proposed approach, we use a cross-entropy based regularization term to make the model neglect the possible unrelated words. We propose two methods for obtaining the soft output word distribution used in the regularization, of which Dual-Train proves to be more effective. Experimental results show that the proposed method can improve the semantic consistency by 4% in terms of human evaluation. As shown by the analysis, the proposed method achieves the improvements by eliminating the less semantically-related word correspondence. The proposed human evaluation method is effective and efficient in judging the semantic consistency, which is absent in previous work but is crucial in the accurate evaluation of the text summarization systems. The proposed metric is simple to conduct and easy to interpret. It also provides an insight on how practicable the existing systems are in the real-world scenario.
Standard for Human Evaluation
For human evaluation, the annotators are asked to evaluate the summary against the source content based on the goodness of the summary. If the summary is not understandable, relevant or correct according to the source content, the summary is considered bad. More concretely, the annotators are asked to examine the following aspects to determine whether the summary is good:
If a rule is not met, the summary is labeled bad, and the following rules do not need to be checked. In Table TABREF33 , we give examples for cases of each rule. In the first one, the summary is not fluent, because the patient of the predicate UTF8gbsn“找” (seek for) is missing. The second summary is fluent, but the content is not related to the source, in that we cannot determine if Lei Jun is actually fighting the scalpers based on the source content. In the third one, the summary is fluent and related to the source content, but the facts are wrong, as the summary is made up from facts about different people. The last one meets all three rules, and thus it is considered good. This work is supported in part by the National Natural Science Foundation of China (http://dx.doi.org/10.13039/501100001809) under Grant No. 61673028. | RNN-context, SRB, CopyNet, RNN-distract, DRGD
73bb8b7d7e98ccb88bb19ecd2215d91dd212f50d | 73bb8b7d7e98ccb88bb19ecd2215d91dd212f50d_0 | Q: What human evaluation method is proposed?
Text: Introduction
Abstractive text summarization is an important text generation task. With the application of the sequence-to-sequence model and the publication of large-scale datasets, the quality of automatically generated summaries has been greatly improved BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . However, the semantic consistency of the automatically generated summaries is still far from satisfactory.
The commonly-used large-scale datasets for deep learning models are constructed based on naturally-annotated data with heuristic rules BIBREF1 , BIBREF3 , BIBREF4 . The summaries are not written for the source content specifically. It suggests that the provided summary may not be semantically consistent with the source content. For example, the dataset for Chinese social media text summarization, namely LCSTS, contains more than 20% text-summary pairs that are not related, according to the statistics of the manually checked data BIBREF1 .
Table TABREF1 shows an example of semantic inconsistency. Typically, the reference summary contains extra information that cannot be understood from the source content. It is hard to conclude such a summary from the source content, even for a human. Due to the inconsistency, the system cannot extract enough information from the source text, and it would be hard for the model to learn to generate the summary accordingly. The model has to encode spurious correspondence of the summary and the source content by memorization. However, this kind of correspondence is superficial and is not actually needed for generating reasonable summaries. Moreover, the information is harmful to generating semantically consistent summaries, because unrelated information is modeled. For example, the word UTF8gbsn“利益” (benefits) in the summary is not related to the source content. Thus, it has to be remembered by the model, together with the source content. However, this correspondence is spurious, because the word UTF8gbsn“利益” is not related to any word in the source content. In the following, we refer to this problem as Spurious Correspondence caused by the semantically inconsistent data. In this work, we aim to alleviate the impact of the semantic inconsistency of the current dataset. Based on the sequence-to-sequence model, we propose a regularization method to heuristically slow down the learning of the spurious correspondence, so that the unrelated information in the dataset is less represented by the model. We incorporate a new soft training target to achieve this goal. At each output time step in training, in addition to the gold reference word, the current output also targets a softened output word distribution that regularizes the current output word distribution. In this way, a more robust correspondence between the source content and the output words can be learned, and potentially, the output summary will be more semantically consistent. To obtain the softened output word distribution, we propose two methods based on the sequence-to-sequence model.
More detailed explanation is introduced in Section SECREF2 . Another problem for abstractive text summarization is that the system summary cannot be easily evaluated automatically. ROUGE BIBREF9 is widely used for summarization evaluation. However, as ROUGE is designed for extractive text summarization, it cannot deal with summary paraphrasing in abstractive text summarization. Besides, as ROUGE is based on the reference, it requires high-quality reference summary for a reasonable evaluation, which is also lacking in the existing dataset for Chinese social media text summarization. We argue that for proper evaluation of text generation task, human evaluation cannot be avoided. We propose a simple and practical human evaluation for evaluating text summarization, where the summary is evaluated against the source content instead of the reference. It handles both of the problems of paraphrasing and lack of high-quality reference. The contributions of this work are summarized as follows:
Proposed Method
Based on the fact that the spurious correspondence is not stable and its realization in the model is prone to change, we propose to alleviate the issue heuristically by regularization. We use the cross-entropy with an annealed output distribution as the regularization term in the loss, so that small fluctuations in the distribution are suppressed and a more robust and stable correspondence is learned. By correspondence, we mean the relation between (a) the current output, and (b) the source content and the partially generated output. Furthermore, we propose to use an additional output layer to generate the annealed output distribution. Due to the same fact, the two output layers will differ more in the words that superficially co-occur, so that the output distribution can be better regularized.
Regularizing the Neural Network with Annealed Distribution
Typically, in the training of the sequence-to-sequence model, only the one-hot hard target is used in the cross-entropy based loss function. For an example in the training set, the loss of an output vector is DISPLAYFORM0
where INLINEFORM0 is the output vector, INLINEFORM1 is the one-hot hard target vector, and INLINEFORM2 is the number of labels. However, as INLINEFORM3 is the one-hot vector, all the elements are zero except the one representing the correct label. Hence, the loss becomes DISPLAYFORM0
where INLINEFORM0 is the index of the correct label. The loss is then summed over the output sentences and across the minibatch and used as the source error signal in the backpropagation. The hard target could cause several problems in the training. Soft training methods try to use a soft target distribution to provide a generalized error signal to the training. For the summarization task, a straight-forward way would be to use the current output vector as the soft target, which contains the knowledge learned by the current model, i.e., the correspondence of the source content and the current output word: DISPLAYFORM0
Then, the two losses are combined as the new loss function: DISPLAYFORM0
where INLINEFORM0 is the index of the true label and INLINEFORM1 is the strength of the soft training loss. We refer to this approach as Self-Train (the left part of Figure FIGREF6 ). The output of the model can be seen as a refined supervisory signal for the learning of the model. The added loss promotes the learning of more stable correspondence. The output learns not only from the one-hot distribution but also from the distribution generated by the model itself. However, during training, the output of the neural network can become too close to the one-hot distribution. To solve this, we use a softened output distribution as the soft target. We apply the softmax with temperature INLINEFORM2 , which is computed by DISPLAYFORM0
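A minimal NumPy sketch of the temperature-annealed softmax described here; the interface is assumed for illustration.

```python
import numpy as np

def annealed_distribution(logits, temperature):
    """Softmax with temperature: a higher temperature flattens the
    distribution while preserving the relative order of the labels."""
    z = logits / temperature
    z = z - z.max()                # numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()
```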
This transformation keeps the relative order of the labels, and a higher temperature will make the output distributed more evenly. The key motivation is that if the model is still not confident how to generate the current output word under the supervision of the reference summary, it means the correspondence can be spurious and the reference output is unlikely to be concluded from the source content. It makes no sense to force the model to learn such correspondence. The regularization follows that motivation, and in such case, the error signal will be less significant compared to the one-hot target. In the case where the model is extremely confident how to generate the current output, the annealed distribution will resemble the one-hot target. Thus, the regularization is not effective. In all, we make use of the model itself to identify the spurious correspondence and then regularize the output distribution accordingly.
Dual Output Layers
However, the aforementioned method tries to regularize the output word distribution based on what it has already learned. The relative order of the output words is kept. The self-dependency may not be desirable for regularization. It may be better if more correspondence that is spurious can be identified. In this paper, we further propose to obtain the soft target from a different view of the model, so that different knowledge of the dataset can be used to mitigate the overfitting problem. An additional output layer is introduced to generate the soft target. The two output layers share the same hidden representation but have independent parameters. They could learn different knowledge of the data. We refer to this approach as Dual-Train. For clarity, the original output layer is denoted by INLINEFORM0 and the new output layer INLINEFORM1 . Their outputs are denoted by INLINEFORM2 and INLINEFORM3 , respectively. The output layer INLINEFORM4 acts as the original output layer. We apply soft training using the output from INLINEFORM5 to this output layer to increase its ability of generalization. Suppose the correct label is INLINEFORM6 . The target of the output INLINEFORM7 includes both the one-hot distribution and the distribution generated from INLINEFORM8 : DISPLAYFORM0
The new output layer INLINEFORM0 is trained normally using the originally hard target. This output layer is not used in the prediction, and its only purpose is to generate the soft target to facilitate the soft training of INLINEFORM1 . Suppose the correct label is INLINEFORM2 . The target of the output INLINEFORM3 includes only the one-hot distribution: DISPLAYFORM0
Because of the random initialization of the parameters in the output layers, INLINEFORM0 and INLINEFORM1 could learn different things. The diversified knowledge is helpful when dealing with the spurious correspondence in the data. It can also be seen as an online kind of ensemble methods. Several different instances of the same model are softly aggregated into one to make classification. The right part of Figure FIGREF6 shows the architecture of the proposed Dual-Train method.
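A sketch of how the two training targets described above could be constructed per decoding step: the auxiliary layer keeps the plain one-hot target, while the main layer additionally targets the annealed distribution produced by the auxiliary layer. Names and the NumPy interface are assumptions.

```python
import numpy as np

def dual_train_targets(logits_p2, correct_label, vocab_size, temperature):
    """Build the per-step targets of Dual-Train as described above."""
    one_hot = np.zeros(vocab_size)
    one_hot[correct_label] = 1.0

    z = logits_p2 / temperature
    z = z - z.max()
    annealed = np.exp(z) / np.exp(z).sum()

    target_p2 = one_hot                      # hard target only
    target_p1 = (one_hot, annealed)          # hard target plus soft target
    return target_p1, target_p2
```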
Experiments
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. We also analyze the output text and the output label distribution of the models, showing the power of the proposed approach. Finally, we show the cases where the correspondences learned by the proposed approach are still problematic, which can be explained based on the approach we adopt.
Dataset
Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo. The whole dataset is split into three parts, with 2,400,591 pairs in PART I for training, 10,666 pairs in PART II for validation, and 1,106 pairs in PART III for testing. The authors of the dataset have manually annotated the relevance scores, ranging from 1 to 5, of the text-summary pairs in PART II and PART III. They suggested that only pairs with scores no less than three should be used for evaluation, which leaves 8,685 pairs in PART II, and 725 pairs in PART III. From the statistics of the PART II and PART III, we can see that more than 20% of the pairs are dropped to maintain semantic quality. It indicates that the training set, which has not been manually annotated and checked, contains a huge quantity of unrelated text-summary pairs.
Experimental Settings
We use the sequence-to-sequence model BIBREF10 with attention BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 as the Baseline. Both the encoder and decoder are based on the single layer LSTM BIBREF15 . The word embedding size is 400, and the hidden state size of the LSTM unit is 500. We conduct experiments on the word level. To convert the character sequences into word sequences, we use Jieba to segment the words, the same with the existing work BIBREF1 , BIBREF6 . Self-Train and Dual-Train are implemented based on the baseline model, with two more hyper-parameters, the temperature INLINEFORM0 and the soft training strength INLINEFORM1 . We use a very simple setting for all tasks, and set INLINEFORM2 , INLINEFORM3 . We pre-train the model without applying the soft training objective for 5 epochs out of total 10 epochs. We use the Adam optimizer BIBREF16 for all the tasks, using the default settings with INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . In testing, we use beam search to generate the summaries, and the beam size is set to 5. We report the test results at the epoch that achieves the best score on the development set.
Evaluation Protocol
For text summarization, a common automatic evaluation method is ROUGE BIBREF9 . The generated summary is evaluated against the reference summary, based on unigram recall (ROUGE-1), bigram recall (ROUGE-2), and recall of longest common subsequence (ROUGE-L). To facilitate comparison with the existing systems, we adopt ROUGE as the automatic evaluation method. The ROUGE is calculated on the character level, following the previous work BIBREF1 . However, for abstractive text summarization, the ROUGE is sub-optimal, and cannot assess the semantic consistency between the summary and the source content, especially when there is only one reference for a piece of text. The reason is that the same content may be expressed in different ways with different focuses. Simple word match cannot recognize the paraphrasing. It is the case for all of the existing large-scale datasets. Besides, as aforementioned, ROUGE is calculated on the character level in Chinese text summarization, making the metrics favor the models on the character level in practice. In Chinese, a word is the smallest semantic element that can be uttered in isolation, not a character. In the extreme case, the generated text could be completely intelligible, but the characters could still match. In theory, calculating ROUGE metrics on the word level could alleviate the problem. However, word segmentation is also a non-trivial task for Chinese. There are many kinds of segmentation rules, which will produce different ROUGE scores. We argue that it is not acceptable to introduce additional systematic bias in automatic evaluations, and automatic evaluation for semantically related tasks can only serve as a reference. To avoid the deficiencies, we propose a simple human evaluation method to assess the semantic consistency. Each summary candidate is evaluated against the text rather than the reference. If the candidate is irrelevant or incorrect to the text, or the candidate is not understandable, the candidate is labeled bad. Otherwise, the candidate is labeled good. Then, we can get an accuracy of the good summaries. The proposed evaluation is very simple and straight-forward. It focuses on the relevance between the summary and the text. The semantic consistency should be the major consideration when putting the text summarization methods into practice, but the current automatic methods cannot judge properly. For detailed guidelines in human evaluation, please refer to Appendix SECREF6 . In the human evaluation, the text-summary pairs are dispatched to two human annotators who are native speakers of Chinese. As in our setting the summary is evaluated against the reference, the number of the pairs needs to be manually evaluated is four times the number of the pairs in the test set, because we need to compare four systems in total. To decrease the workload and get a hint about the annotation quality at the same time, we adopt the following procedure. We first randomly select 100 pairs in the validation set for the two human annotators to evaluate. Each pair is annotated twice, and the inter-annotator agreement is checked. We find that under the protocol, the inter-annotator agreement is quite high. In the evaluation of the test set, a pair is only annotated once to accelerate evaluation. To further maintain consistency, summaries of the same source content will not be distributed to different annotators.
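A simple sketch of the inter-annotator check described above, computed as plain percentage agreement on the doubly annotated subset; the function name is an assumption.

```python
def percent_agreement(labels_a, labels_b):
    """Fraction of summaries that receive the same good/bad label from
    both annotators on the doubly annotated validation subset."""
    assert len(labels_a) == len(labels_b)
    same = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
    return same / len(labels_a)
```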
Experimental Results
First, we show the results for human evaluation, which focuses on the semantic consistency of the summary with its source content. We evaluate the systems implemented by us as well as the reference. We cannot conduct human evaluations for the existing systems from other work, because the output summaries needed are not available for us. Besides, the baseline system we implemented is very competitive in terms of ROUGE and achieves better performance than almost all the existing systems. The results are listed in Table TABREF24 . It is surprising to see that the accuracy of the reference summaries does not reach 100%. It means that the test set still contains text-summary pairs of poor quality even after removing the pairs with relevance scores lower than 3 as suggested by the authors of the dataset. As we can see, Dual-Train improves the accuracy by 4%. Due to the rigorous definition of being good, the results mean that 4% more of the summaries are semantically consistent with their source content. However, Self-Train has a performance drop compared to the baseline. After investigating its generated summaries, we find that the major reason is that the generated summaries are not grammatically complete and often stop too early, although the generated part is indeed more related to the source content. Because the definition of being good, the improved relevance does not make up the loss on intelligibility.
Then, we compare the automatic evaluation results in Table TABREF25 . As we can see, only applying soft training without adaptation (Self-Train) hurts the performance. With the additional output layer (Dual-Train), the performance can be greatly improved over the baseline. Moreover, with the proposed method the simple baseline model is second to the best compared with the state-of-the-art models and even surpasses in ROUGE-2. It is promising that applying the proposed method to the state-of-the-art model could also improve its performance. The automatic evaluation is done on the original test set to facilitate comparison with existing work. However, a more reasonable setting would be to exclude the 52 test instances that are found bad in the human evaluation, because the quality of the automatic evaluation depends on the reference summary. As the existing methods do not provide their test output, it is a non-trivial task to reproduce all their results of the same reported performance. Nonetheless, it does not change the fact that ROUGE cannot handle the issues in abstractive text summarization properly.
Experimental Analysis
To examine the effect of the proposed method and reveal how the proposed method improves the consistency, we compare the output of the baseline with Dual-Train, based on both the output text and the output label distribution. We also conduct error analysis to discover room for improvements.
To gain a better understanding of the results, we analyze the summaries generated by the baseline model and our proposed model. Some of the summaries are listed in Table TABREF28 . As shown in the table, the summaries generated by the proposed method are much better than the baseline, and we believe they are more precise and informative than the references. In the first one, the baseline system generates a grammatical but unrelated summary, while the proposed method generates a more informative summary. In the second one, the baseline system generates a related but ungrammatical summary, while the proposed method generates a summary related to the source content but different from the reference. We believe the generated summary is actually better than the reference because the focus of the visit is not the event itself but its purpose. In the third one, the baseline system generates a related and grammatical summary, but the facts stated are completely incorrect. The summary generated by the proposed method is more comprehensive than the reference, while the reference only includes the facts in the last sentence of the source content. In short, the generated summary of the proposed method is more consistent with the source content. It also exhibits the necessity of the proposed human evaluation. Because when the generated summary is evaluated against the reference, it may seem redundant or wrong, but it is actually true to the source content. While it is arguable that the generated summary is better than the reference, there is no doubt that the generated summary of the proposed method is better than the baseline. However, the improvement cannot be properly shown by the existing evaluation methods. Furthermore, the examples suggest that the proposed method does learn better correspondence. The highlighted words in each example in Table TABREF28 share almost the same previous words. However, in the first one, the baseline considers “UTF8gbsn停” (stop) as the most related words, which is a sign of noisy word relations learned from other training examples, while the proposed method generates “UTF8gbsn进站” (to the platform), which is more related to what a human thinks. It is the same with the second example, where a human selects “UTF8gbsn专家” (expert) and Dual-Train selects “UTF8gbsn工作者” (worker), while the baseline selects “UTF8gbsn钻研” (research) and fails to generate a grammatical sentence later. In the third one, the reference and the baseline use the same word, while Dual-Train chooses a word of the same meaning. It can be concluded that Dual-Train indeed learns better word relations that could generalize to the test set, and good word relations can guide the decoder to generate semantically consistent summaries.
To show why the generated text of the proposed method is more related to the source content, we further analyze the label distribution, i.e., the word distribution, generated by the (first) output layer, from which the output word is selected. To illustrate the relationship, we calculate a representation for each word based on the label distributions. Each representation is associated with a specific label (word), denoted by INLINEFORM0 , and each dimension INLINEFORM1 shows how likely the label indexed by INLINEFORM2 will be generated instead of the label INLINEFORM3 . To get such representation, we run the model on the training set and get the output vectors in the decoder, which are then averaged with respect to their corresponding labels to form a representation. We can obtain the most related words of a word by simply selecting the highest values from its representation. Table TABREF30 lists some of the labels and the top 4 labels that are most likely to replace each of the labels. It is a hint about the correspondence learned by the model. From the results, it can be observed that Dual-Train learns the better semantic relevance of a word compared to the baseline because the spurious word correspondence is alleviated by regularization. For example, the possible substitutes of the word “UTF8gbsn多长时间” (how long) considered by Dual-Train include “UTF8gbsn多少” (how many), “UTF8gbsn多久” (how long) and “UTF8gbsn时间” (time). However, the relatedness is learned poorly in the baseline, as there is “UTF8gbsn知道” (know), a number, and two particles in the possible substitutes considered by the baseline. Another representative example is the word “UTF8gbsn图像” (image), where the baseline also includes two particles in its most related words. The phenomenon shows that the baseline suffers from spurious correspondence in the data, and learns noisy and harmful relations, which rely too much on the co-occurrence. In contrast, the proposed method can capture more stable semantic relatedness of the words. For text summarization, grouping the words that are in the same topic together can help the model to generate sentences that are more coherent and can improve the quality of the summarization and the relevance to the source content. Although the proposed method resolves a large number of the noisy word relations, there are still cases that the less related words are not eliminated. For example, the top 4 most similar words of “UTF8gbsn期货业” (futures industry) from the proposed method include “UTF8gbsn改革” (reform). It is more related than “2013” from the baseline, but it can still be harmful to text summarization. The problem could arise from the fact that words as “UTF8gbsn期货业” rarely occur in the training data, and their relatedness is not reflected in the data. Another issue is that there are some particles, e.g., “UTF8gbsn的” (DE) in the most related words. A possible explanation is that particles show up too often in the contexts of the word, and it is hard for the models to distinguish them from the real semantically-related words. As our proposed approach is based on regularization of the less common correspondence, it is reasonable that such kind of relation cannot be eliminated. The first case can be categorized into data sparsity, which usually needs the aid of knowledge bases to solve. The second case is due to the characteristics of natural language. However, as such words are often closed class words, the case can be resolved by manually restricting the relatedness of these words.
Related Work
Related work includes efforts on designing models for the Chinese social media text summarization task and the efforts on obtaining soft training target for supervised learning.
Systems for Chinese Social Media Text Summarization
The Large-Scale Chinese Short Text Summarization dataset was proposed by BIBREF1. Along with the dataset, BIBREF1 also proposed two systems to solve the task, namely RNN and RNN-context. They were two sequence-to-sequence based models with GRUs as the encoder and the decoder; the difference between them was that RNN-context had an attention mechanism while RNN did not. They conducted experiments both on the character level and on the word level. RNN-distract BIBREF5 was a distraction-based neural model, where the attention mechanism focused on different parts of the source content. CopyNet BIBREF6 incorporated a copy mechanism to allow part of the generated summary to be copied from the source content; the copy mechanism also explained why the results of their word-level model were better than those of their character-level model. SRB BIBREF17 was a sequence-to-sequence based neural model designed to improve the semantic relevance between the input text and the output summary. DRGD BIBREF8 was a deep recurrent generative decoder model, combining the decoder with a variational autoencoder.
Methods for Obtaining Soft Training Target
Soft targets aim to refine the supervisory signal in supervised learning. Related work includes soft targets for traditional learning algorithms and model distillation for deep learning algorithms. The soft label methods are typically used for binary classification BIBREF18, where the human annotators not only assign a label to an example but also indicate how confident they are in the annotation. The main difference from our method is that the soft label methods require additional annotation information (e.g., the confidence of the annotated labels) for the training data, which is costly for the text summarization task. There have also been prior studies on model distillation in deep learning, which distill large models into smaller ones. Model distillation BIBREF19 combined different instances of the same model into a single one; it used the output distributions of the previously trained models as the soft target distribution to train a new model. A work similar to model distillation is the soft-target regularization method BIBREF20 for image classification. Instead of using the outputs of other instances, it used an exponential average of the past label distributions of the current instance as the soft target distribution. The proposed method differs from the existing model distillation methods in that it does not require additional models or additional space to record the past soft label distributions. The existing methods are not suitable for text summarization, because training an additional model is costly and the additional space is huge due to the massive amount of data. The proposed method uses its current state as the soft target distribution and eliminates the need to train additional models or store the history information.
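For concreteness, a minimal sketch of the kind of cross-entropy regularization against a soft target distribution discussed here is given below. It is not the authors' exact Dual-Train formulation: the combination weight `lam` and the way the soft targets are produced are assumptions; only the general pattern (a hard cross-entropy term plus a cross-entropy term against a detached soft distribution) follows the description above.

```python
import torch
import torch.nn.functional as F

def loss_with_soft_target(logits, hard_targets, soft_targets, lam=0.5):
    # logits: (batch, vocab); hard_targets: (batch,) gold word indices
    # soft_targets: (batch, vocab) probability distributions used as the extra signal
    hard_loss = F.cross_entropy(logits, hard_targets)
    log_probs = F.log_softmax(logits, dim=-1)
    # cross-entropy against the (detached) soft distribution acts as the regularizer
    soft_loss = -(soft_targets.detach() * log_probs).sum(dim=-1).mean()
    return hard_loss + lam * soft_loss
```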
Conclusions
We propose a regularization approach for the sequence-to-sequence model on the Chinese social media summarization task. In the proposed approach, we use a cross-entropy based regularization term to make the model neglect possibly unrelated words. We propose two methods for obtaining the soft output word distribution used in the regularization, of which Dual-Train proves to be more effective. Experimental results show that the proposed method improves semantic consistency by 4% in terms of human evaluation. As shown by the analysis, the proposed method achieves the improvements by eliminating the less semantically related word correspondence. The proposed human evaluation method is effective and efficient in judging semantic consistency, which is absent in previous work but crucial for the accurate evaluation of text summarization systems. The proposed metric is simple to conduct and easy to interpret. It also provides insight into how practical the existing systems are in real-world scenarios.
Standard for Human Evaluation
For human evaluation, the annotators are asked to evaluate the summary against the source content based on the goodness of the summary. If the summary is not understandable, relevant, or correct according to the source content, it is considered bad. More concretely, the annotators are asked to examine the following aspects to determine whether the summary is good: (1) the summary is fluent and grammatical; (2) the summary is relevant to the source content; (3) the facts in the summary are consistent with the source content.
If a rule is not met, the summary is labeled bad, and the following rules do not need to be checked. In Table TABREF33, we give examples for cases of each rule. In the first one, the summary is not fluent, because the patient of the predicate “找” (seek for) is missing. The second summary is fluent, but the content is not related to the source, in that we cannot determine whether Lei Jun is actually fighting the scalpers based on the source content. In the third one, the summary is fluent and related to the source content, but the facts are wrong, as the summary is made up of facts about different people. The last one meets all three rules and is thus considered good. This work is supported in part by the National Natural Science Foundation of China under Grant No. 61673028. | comparing the summary with the text instead of the reference and labeling the candidate bad if it is incorrect or irrelevant
86e3136271a7b93991c8de5d310ab15a6ac5ab8c | 86e3136271a7b93991c8de5d310ab15a6ac5ab8c_0 | Q: How is human evaluation performed, what were the criteria?
Text: Introduction
Open-domain response generation BIBREF0, BIBREF1 for single-round short text conversation BIBREF2, aims at generating a meaningful and interesting response given a query from human users. Neural generation models are of growing interest in this topic due to their potential to leverage massive conversational datasets on the web. These generation models such as encoder-decoder models BIBREF3, BIBREF2, BIBREF4, directly build a mapping from the input query to its output response, which treats all query-response pairs uniformly and optimizes the maximum likelihood estimation (MLE). However, when the models converge, they tend to output bland and generic responses BIBREF5, BIBREF6, BIBREF7.
Many enhanced encoder-decoder approaches have been proposed to improve the quality of generated responses. They can be broadly classified into two categories (see Section SECREF2 for details): (1) The first category does not change the encoder-decoder framework itself. These approaches only change the decoding strategy, such as encouraging diverse tokens to be selected in beam search BIBREF5, BIBREF8, or add more components on top of the encoder-decoder framework, such as the Generative Adversarial Network (GAN)-based methods BIBREF9, BIBREF10, BIBREF11, which add discriminators to perform adversarial training. (2) The second category modifies the encoder-decoder framework directly by incorporating useful information as latent variables in order to generate more specific responses BIBREF12, BIBREF13. However, all these enhanced methods still optimize the MLE of the log-likelihood or the complete log-likelihood conditioned on their assumed latent information, and models estimated by the MLE naturally tend to output frequent patterns in the training data.
Instead of optimizing the MLE, some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional data log-likelihood on a continuous latent variable BIBREF14, BIBREF15. Open-domain response generation is a one-to-many problem, in which a query can be associated with many valid responses. The CVAE-based models generally assume the latent variable follows a multivariate Gaussian distribution with a diagonal covariance matrix, which can capture the latent distribution over all valid responses. With different sampled latent variables, the model is expected to decode diverse responses. Due to the advantage of the CVAE in modeling the response generation process, we focus on improving the performance of the CVAE-based response generation models.
Although the CVAE has achieved impressive results on many generation problems BIBREF16, BIBREF17, recent results on response generation show that CVAE-based generation models still suffer from the low output diversity problem; that is, multiple sampled latent variables result in responses with similar semantic meanings. To address this problem, extra guiding signals are often used to improve the basic CVAE. BIBREF14 zhao2017learning use dialogue acts to capture the discourse variations in multi-round dialogues as guiding knowledge. However, such discourse information can hardly be extracted for short-text conversation.
In our work, we propose a discrete CVAE (DCVAE), which utilizes a discrete latent variable with an explicit semantic meaning in the CVAE for short-text conversation. Our model mitigates the low output diversity problem in the CVAE by exploiting the semantic distance between the latent variables to maintain good diversity between the sampled latent variables. Accordingly, we propose a two-stage sampling approach to enable efficient selection of diverse variables from a large latent space assumed in the short-text conversation task.
To summarize, this work makes three contributions: (1) We propose a response generation model for short-text conversation based on a DCVAE, which utilizes a discrete latent variable with an explicit semantic meaning and could generate high-quality responses. (2) A two-stage sampling approach is devised to enable efficient selection of diverse variables from a large latent space assumed in the short-text conversation task. (3) Experimental results show that the proposed DCVAE with the two-stage sampling approach outperforms various kinds of generation models under both automatic and human evaluations, and generates more high-quality responses. All our code and datasets are available at https://ai.tencent.com/ailab/nlp/dialogue.
Related Work
In this section, we briefly review recent advancement in encoder-decoder models and CVAE-based models for response generation.
Related Work ::: Encoder-decoder models
Encoder-decoder models for short-text conversation BIBREF3, BIBREF2 maximize the likelihood of responses given queries. During testing, a decoder sequentially generates a response using search strategies such as beam search. However, these models frequently generate bland and generic responses.
Some early work improves the quality of generated responses by modifying the decoding strategy. For example, BIBREF5 li2016diversity propose to use the maximum mutual information (MMI) to penalize general responses in beam search during testing. Some later studies alter the data distributions according to different sample weighting schemes, encouraging the model to put more emphasis on learning samples with rare words BIBREF18, BIBREF19. As can be seen, these methods focus on either pre-processing the dataset before training or post-processing the results in testing, with no change to encoder-decoder models themselves.
Other work uses encoder-decoder models as the basis and adds more components to refine the response generation process. BIBREF9 xu2017neural present a GAN-based model with an approximate embedding layer. zhang2018generating employ an adversarial learning method to directly optimize the lower bound of the MMI objective BIBREF5 in model training. These models employ the encoder-decoder models as the generator and focus on how to design the discriminator and optimize the generator and discriminator jointly. Deep reinforcement learning is also applied to model future reward in chatbots after an encoder-decoder model converges BIBREF6, BIBREF11. The above methods directly integrate the encoder-decoder models as one of their modules and still do not actually modify the encoder-decoder models themselves.
Much attention has turned to incorporating useful information as latent variables in the encoder-decoder framework to improve the quality of generated responses. BIBREF12 yao2017towards consider that a response is generated by a query and a pre-computed cue word jointly. BIBREF13 zhou2017mechanism utilize a set of latent embeddings to model diverse responding mechanisms. BIBREF20 xing2017topic introduce pre-defined topics from an external corpus to augment the information used in response generation. BIBREF21 gao2019generating propose a model that infers latent words to generate multiple responses. These studies indicate that many factors in conversation are useful to model the variation of a generated response, but it is nontrivial to extract all of them. Also, these methods still optimize the MLE of the complete log-likelihood conditioned on their assumed latent information, and a model optimized with the MLE naturally favors frequent patterns in the training data. Note that we apply a latent space assumption similar to that used in BIBREF12, BIBREF21, i.e., the latent variables are words from the vocabulary. However, they use a latent word in a factorized encoder-decoder model, whereas our model uses it to construct a discrete CVAE, and our optimization algorithm is entirely different from theirs.
Related Work ::: The CVAE-based models
A few works indicate that it is worth applying the CVAE, which was originally used in image generation BIBREF16, BIBREF17 and is optimized with the variational lower bound of the conditional log-likelihood, to dialogue generation. For task-oriented dialogues, BIBREF22 wen2017latent use the latent variable to model intentions in the framework of neural variational inference. For chit-chat multi-round conversations, BIBREF23 serban2017hierarchical model the generative process with multiple levels of variability based on a hierarchical sequence-to-sequence model with a continuous high-dimensional latent variable. BIBREF14 zhao2017learning make use of the CVAE, with the latent variable capturing discourse-level variations. BIBREF24 gu2018dialogwae propose to induce the latent variables by transforming context-dependent Gaussian noise. BIBREF15 shen2017conditional present a conditional variational framework for generating specific responses based on specific attributes. Yet, it is observed in other tasks such as image captioning BIBREF25 and question generation BIBREF26 that CVAE-based generation models suffer from the low output diversity problem, i.e., multiple sampled variables point to the same generated sequences. In this work, we utilize a discrete latent variable with an interpretable meaning to alleviate this low output diversity problem in short-text conversation.
We find that BIBREF27 zhao2018unsupervised make use of a set of discrete variables that define high-level attributes of a response. Although they interpret the meanings of the learned discrete latent variables by clustering data according to certain classes (e.g., dialog acts), such latent variables still have no exact meanings. In our model, we connect each latent variable with a word in the vocabulary, so each latent variable has an exact semantic meaning. Besides, they focus on multi-turn dialogue generation and present an unsupervised discrete sentence representation learning method that learns from the context, while our focus is primarily on single-turn dialogue generation with no context information.
Proposed Models ::: DCVAE and Basic Network Modules
Following previous CVAE-based generation models BIBREF14, we introduce a latent variable $z$ for each input sequence and our goal is to maximize the lower bound on the conditional data log-likelihood $p(\mathbf {y}|\mathbf {x})$, where $\mathbf {x}$ is the input query sequence and $\mathbf {y}$ is the target response sequence:
Here, $p(z|\mathbf {x})$/$q(z|\mathbf {y},\mathbf {x})$/$p(\mathbf {y}|\mathbf {x},z)$ is parameterized by the prior/posterior/generation network respectively. $D_{KL}(q(z|\mathbf {y},\mathbf {x})||p(z|\mathbf {x}))$ is the Kullback-Leibler (KL) divergence between the posterior and prior distribution. Generally, $z$ is set to follow a Gaussian distribution in both the prior and posterior networks. As mentioned in the related work, directly using the above CVAE formulation causes the low output diversity problem. This observation is also validated in the short-text conversation task in our experiments.
Now, we introduce our basic discrete CVAE formulation to alleviate the low output diversity problem. We change the continuous latent variable $z$ to a discrete latent one with an explicit interpretable meaning, which could actively control the generation of the response. An intuitive way is to connect each latent variable with a word in the vocabulary. With a sampled latent $z$ from the prior (in testing)/posterior network (in training), the generation network will take the query representation together with the word embedding of this latent variable as the input to decode the response. Here, we assume that a single word is enough to drive the generation network to output diverse responses for short text conversation, in which the response is generally short and compact.
A major advantage of our DCVAE is that for words with far different meanings, their word embeddings (especially since we use a good pre-trained word embedding corpus) generally have a large distance and drive the generation network to decode scattered responses, thus improving output diversity. In the standard CVAE, the $z$'s assumed in a continuous space may not maintain the semantic distance as in the embedding space, and diverse $z$'s may point to the same semantic meaning, in which case the generation network is hard to train well with such confusing information. Moreover, we can make use of the semantic distance between latent variables to perform better sampling to approximate the objective during optimization, which will be introduced in Section SECREF10.
The latent variable $z$ is thus set to follow a categorical distribution with each dimension corresponding to a word in the vocabulary. Therefore the prior and posterior networks should output categorical probability distributions:
where $\theta $ and $\phi $ are parameters of the two networks respectively. The KL distance of these two distributions can be calculated in a closed form solution:
where $Z$ contains all words in the vocabulary.
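The closed form above is the standard KL divergence between two categorical distributions, $\sum_{z \in Z} q(z|\mathbf{y},\mathbf{x}) \log \frac{q(z|\mathbf{y},\mathbf{x})}{p(z|\mathbf{x})}$. A minimal numerical sketch is shown below; the epsilon clipping is an implementation choice for stability, not part of the model.

```python
import numpy as np

def categorical_kl(q, p, eps=1e-12):
    """q = q(z|y,x), p = p(z|x): probability vectors over the latent words Z."""
    q = np.clip(q, eps, 1.0)
    p = np.clip(p, eps, 1.0)
    return float(np.sum(q * (np.log(q) - np.log(p))))
```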
In the following, we present the details of the prior, posterior and generation network.
Prior network $p(z|\mathbf {x})$: It aims at inferring the latent variable $z$ given the input sequence $x$. We first obtain an input representation $\mathbf {h}_{\mathbf {x}}^{p}$ by encoding the input query $\mathbf {x}$ with a bi-directional GRU and then compute $g_{\theta }(\mathbf {x})$ in Eq. as follows:
where $\theta $ contains parameters in both the bidirectional GRU and Eq. DISPLAY_FORM8.
Posterior network $q(z|\mathbf {y}, \mathbf {x})$: It infers a latent variable $z$ given an input query $\mathbf {x}$ and its target response $\mathbf {y}$. We construct the representations for the input and the target sequence with separate bi-directional GRUs, then add them up to compute $f_{\phi }(\mathbf {y}, \mathbf {x})$ in Eq. to predict the probability of $z$:
where $\phi $ contains parameters in the two encoding functions and Eq. DISPLAY_FORM9. Note that the parameters of the encoding functions are not shared in the prior and posterior network.
Generation network $p(\mathbf {y}|\mathbf {x},z)$: We adopt an encoder-decoder model with attention BIBREF28 used in the decoder. With a sampled latent variable $z$, a typical strategy is to combine its representation, which in this case is the word embedding $\mathbf {e}_z$ of $z$, only in the beginning of decoding. However, many previous works observe that the influence of the added information will vanish over time BIBREF12, BIBREF21. Thus, after obtaining an attentional hidden state at each decoding step, we concatenate the representation $\mathbf {h}_z$ of the latent variable and the current hidden state to produce a final output in our generation network.
Proposed Models ::: A Two-Stage Sampling Approach
When the CVAE models are optimized, they tend to converge to a solution with a vanishingly small KL term, thus failing to encode meaningful information in $z$. To address this problem, we follow the idea in BIBREF14, which introduces an auxiliary loss that requires the decoder in the generation network to predict the bag-of-words in the response $\mathbf {y}$. Specifically, the response $\mathbf {y}$ is now represented by two sequences simultaneously: $\mathbf {y}_o$ with word order and $\mathbf {y}_{bow}$ without order. These two sequences are assumed to be conditionally independent given $z$ and $\mathbf {x}$. Then our training objective can be rewritten as:
where $p(\mathbf {y}_{bow}|\mathbf {x}, z)$ is obtained by a multilayer perceptron $\mathbf {h}^{b} = \mbox{MLP}(\mathbf {x}, z)$:
where $|\mathbf {y}|$ is the length of $\mathbf {y}$, $y_t$ is the word index of $t$-th word in $\mathbf {y}$, and $V$ is the vocabulary size.
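A hedged sketch of this bag-of-words auxiliary loss is given below. The MLP layer sizes, the padding convention, and the module name are illustrative assumptions; only the overall computation (one vocabulary distribution per example, summing the log-probabilities of all response words regardless of position) follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BowLoss(nn.Module):
    """Predicts one distribution over the vocabulary from (query repr, latent word
    embedding) and sums the log-probabilities of all words in the response."""
    def __init__(self, input_dim, vocab_size, hidden_dim=512):
        super().__init__()
        # input_dim = dim(x_repr) + dim(z_emb); hidden_dim is an illustrative choice
        self.mlp = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.Tanh(),
                                 nn.Linear(hidden_dim, vocab_size))

    def forward(self, x_repr, z_emb, response_ids, pad_id=0):
        # x_repr: (batch, d_x); z_emb: (batch, d_z); response_ids: (batch, max_len)
        logits = self.mlp(torch.cat([x_repr, z_emb], dim=-1))    # (batch, vocab)
        log_probs = F.log_softmax(logits, dim=-1)
        token_lp = log_probs.gather(1, response_ids)             # (batch, max_len)
        mask = (response_ids != pad_id).float()
        return -(token_lp * mask).sum(dim=1).mean()              # loss = -log p(y_bow | x, z)
```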
During training, we generally approximate $\mathbb {E}_{z \sim q(z|\mathbf {y},\mathbf {x})}[\log p(\mathbf {y}|\mathbf {x},z)]$ by sampling $z$ $N$ times from the distribution $q(z|\mathbf {y}, \mathbf {x})$. In our model, the latent space is discrete but generally large, since we set it to be the vocabulary of the dataset. The vocabulary contains words that are similar syntactically or semantically. Directly sampling $z$ from the categorical distribution in Eq. cannot make use of such word similarity information.
Hence, we propose to modify our model in Section SECREF4 to consider the word similarity for sampling multiple accurate and diverse latent $z$'s. We first cluster $z \in Z$ into $K$ clusters $c_1,\ldots , c_K$. Each $z$ belongs to only one of the $K$ clusters and dissimilar words lie in distinctive groups. We use the K-means clustering algorithm to group $z$'s using a pre-trained embedding corpus BIBREF29. Then we revise the posterior network to perform a two-stage cluster sampling by decomposing $q(z|\mathbf {y}, \mathbf {x})$ as :
That is, we first compute $q(c_{k_z}|\mathbf {y}, \mathbf {x})$, which is the probability of the cluster that $z$ belongs to conditioned on both $\mathbf {x}$ and $\mathbf {y}$. Next, we compute $q(z|\mathbf {x}, \mathbf {y}, c_{k_z})$, which is the probability distribution of $z$ conditioned on the $\mathbf {x}$, $\mathbf {y}$ and the cluster $c_{k_z}$. When we perform sampling from $q(z|\mathbf {x}, \mathbf {y})$, we can exploit the following two-stage sampling approach: first sample the cluster based on $q( c_{k} |\mathbf {x}, \mathbf {y})$; next sample a specific $z$ from $z$'s within the sampled cluster based on $q(z|\mathbf {x}, \mathbf {y}, c_{k_z})$.
Similarly, we can decompose the prior distribution $p(z| \mathbf {x})$ accordingly for consistency:
In testing, we can perform the two-stage sampling according to $p(c_{k}|\mathbf {x})$ and $p(z|\mathbf {x}, c_{k_z})$. Our full model is illustrated in Figure FIGREF3.
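A minimal sketch of the clustering and the two-stage sampling at test time is shown below, assuming the prior network already exposes $p(c_k|\mathbf{x})$ and (nonnegative, possibly unnormalized) word scores for $p(z|\mathbf{x}, c_{k_z})$. The function names and the use of scikit-learn's K-means are illustrative, not taken from the released code.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_clusters(word_embeddings, k=10, seed=0):
    """word_embeddings: (vocab_size, dim) pre-trained embeddings; returns a cluster id per word."""
    return KMeans(n_clusters=k, random_state=seed).fit_predict(word_embeddings)

def two_stage_sample(cluster_probs, word_scores, cluster_ids, rng):
    """cluster_probs: (K,) = p(c|x); word_scores: (vocab,) nonnegative scores for p(z|x, c);
    cluster_ids: (vocab,) cluster assignment of each word."""
    c = rng.choice(len(cluster_probs), p=cluster_probs)      # stage 1: sample a cluster
    members = np.where(cluster_ids == c)[0]                  # words belonging to that cluster
    p = word_scores[members] / word_scores[members].sum()    # stage 2: renormalize within it
    return int(rng.choice(members, p=p))

# example: rng = np.random.default_rng(0); z = two_stage_sample(pc, scores, ids, rng)
```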
Network structure modification: To modify the network structure for the two-stage sampling method, we first compute the probability of each cluster given $\mathbf {x}$ in the prior network (or $\mathbf {x}$ and $\mathbf {y}$ in the posterior network) with a softmax layer (Eq. DISPLAY_FORM8 or Eq. DISPLAY_FORM9 followed by a softmax function). We then add the input representation and the cluster embedding $\mathbf {e}_{c_z}$ of a sampled cluster $c_{z}$, and use another softmax layer to compute the probability of each $z$ within the sampled cluster. In the generation network, the representation of $z$ is the sum of the cluster embedding $\mathbf {e}_{c_z}$ and its word embedding $\mathbf {e}_{z}$.
Network pre-training: To speed up the convergence of our model, we pre-extract keywords from each query using the TF-IDF method. Then we use these keywords to pre-train the prior and posterior networks. The generation network is not pre-trained because in practice it converges fast in only a few epochs.
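A rough sketch of such TF-IDF keyword pre-extraction is given below; the exact scoring variant the authors used is not specified, so this is one common formulation with illustrative names.

```python
import math
from collections import Counter

def tfidf_keywords(tokenized_queries, top_n=1):
    """tokenized_queries: list of token lists; returns the top_n keywords per query."""
    df = Counter()
    for tokens in tokenized_queries:
        df.update(set(tokens))
    n_docs = len(tokenized_queries)
    keywords = []
    for tokens in tokenized_queries:
        tf = Counter(tokens)
        scores = {w: (tf[w] / max(len(tokens), 1)) * math.log(1 + n_docs / (1 + df[w]))
                  for w in tf}
        keywords.append([w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]])
    return keywords
```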
Experimental Settings
Next, we describe our experimental settings including the dataset, implementation details, all compared methods, and the evaluation metrics.
Experimental Settings ::: Dataset
We conduct our experiments on a short-text conversation benchmark dataset BIBREF2 which contains about 4 million post-response pairs from Sina Weibo, a Chinese social platform. We employ the Jieba Chinese word segmenter to tokenize the queries and responses into sequences of Chinese words. We use a vocabulary of 50,000 words (a mixture of Chinese words and characters), which covers 99.98% of the words in the dataset. All other words are replaced with $<$UNK$>$.
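A minimal preprocessing sketch matching this description (Jieba segmentation, a 50,000-word frequency-based vocabulary, and $<$UNK$>$ replacement) could look as follows; the function names are illustrative.

```python
from collections import Counter
import jieba  # the segmenter named above

def build_vocab(texts, size=50000):
    counts = Counter()
    for t in texts:
        counts.update(jieba.lcut(t))
    return {w for w, _ in counts.most_common(size)}

def tokenize(text, vocab, unk="<UNK>"):
    return [w if w in vocab else unk for w in jieba.lcut(text)]
```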
Experimental Settings ::: Implementation Details
We use a single-layer bi-directional GRU for the encoder in the prior/posterior/generation network, and a one-layer GRU for the decoder in the generation network. The dimension of all hidden vectors is 1024. The cluster embedding dimension is 620. Except for the word embeddings, which are initialized from the word embedding corpus BIBREF29, all other parameters are initialized by sampling from a uniform distribution $[-0.1,0.1]$. The batch size is 128. We use the Adam optimizer with a learning rate of 0.0001. For the number of clusters $K$ in our method, we evaluate four different values $(5, 10, 100, 1000)$ using automatic metrics and set $K$ to 10, which performs best among the four options empirically. It takes about one day for every two epochs of our model on a Tesla P40 GPU, and we train ten epochs in total. During testing, we use beam search with a beam size of 10.
Experimental Settings ::: Compared Methods
In our work, we focus on comparing various methods that model $p(\mathbf {y}|\mathbf {x})$ differently. We compare our proposed discrete CVAE (DCVAE) with the two-stage sampling approach to three categories of response generation models:
Baselines: Seq2seq, the basic encoder-decoder model with soft attention mechanism BIBREF30 used in decoding and beam search used in testing; MMI-bidi BIBREF5, which uses the MMI to re-rank results from beam search.
CVAE BIBREF14: We adapt the original model, which was designed for multi-round conversation, to our single-round setting. For a fair comparison, we utilize the same keywords used in our network pre-training as the knowledge-guided features in this model.
Other enhanced encoder-decoder models: Hierarchical Gated Fusion Unit (HGFU) BIBREF12, which incorporates a cue word extracted using pointwise mutual information (PMI) into the decoder to generate meaningful responses; Mechanism-Aware Neural Machine (MANM) BIBREF13, which introduces latent embeddings to allow for multiple diverse response generation.
Here, we do not compare with RL/GAN-based methods, because all the compared methods could replace their objectives with reward functions (as in the RL-based methods) or add a discriminator (as in the GAN-based methods) to further improve the overall performance. However, such extensions are not the contribution of this work; we leave discussing the usefulness of our model, as well as of other enhanced generation models, in combination with RL/GAN-based methods to future work.
Experimental Settings ::: Evaluation
To evaluate the responses generated by all compared methods, we compute the following automatic metrics on our test set:
BLEU: BLEU-n measures the average n-gram precision on a set of reference responses. We report BLEU-n with n=1,2,3,4.
Distinct-1 & distinct-2 BIBREF5: We count the numbers of distinct uni-grams and bi-grams in the generated responses and divide them by the total numbers of generated uni-grams and bi-grams in the test set. These metrics can be regarded as automatic measures of response diversity (a minimal computation is sketched below).
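The distinct-n computation described above can be written in a few lines; this sketch follows the corpus-level definition given here (distinct n-grams over total generated n-grams).

```python
def distinct_n(responses, n):
    """responses: list of token lists from the test set."""
    total, distinct = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        distinct.update(ngrams)
    return len(distinct) / total if total else 0.0

# distinct-1 = distinct_n(generated, 1); distinct-2 = distinct_n(generated, 2)
```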
Three annotators from a commercial annotation company are recruited to conduct our human evaluation. Responses from different models are shuffled for labeling. 300 test queries are randomly selected, and annotators are asked to independently score the results of these queries in terms of their quality: (1) Good (3 points): The response is grammatical, semantically relevant to the query, and, more importantly, informative and interesting; (2) Acceptable (2 points): The response is grammatical and semantically relevant to the query, but too trivial or generic (e.g., “我不知道 (I don't know)”, “我也是 (Me too)”, “我喜欢 (I like it)”, etc.); (3) Failed (1 point): The response has grammar mistakes or is irrelevant to the query.
Experimental Results and Analysis
In the following, we will present results of all compared methods and conduct a case study on such results. Then, we will perform further analysis of our proposed method by varying different settings of the components designed in our model.
Experimental Results and Analysis ::: Results on All Compared Methods
Results on automatic metrics are shown on the left-hand side of Table TABREF20. From the results we can see that our proposed DCVAE achieves the best BLEU scores and the second-best distinct ratios. The HGFU has the best dist-2 ratio, but its BLEU scores are the worst. These results indicate that the responses generated by the HGFU are less close to the ground-truth references. Although the automatic evaluation generally indicates the quality of generated responses, it cannot accurately evaluate every generated response, and the automatic metrics may not be consistent with human perception BIBREF31. Thus, we consider the human evaluation results more reliable.
For the human evaluation results on the right-hand side of Table TABREF20, we show the mean and standard deviation of all test results as well as the percentage of acceptable responses (2 or 3 points) and good responses (3 points only). Our proposed DCVAE has the best quality score among all compared methods. Moreover, DCVAE achieves a much higher good ratio, which means it generates more informative and interesting responses. Besides, the HGFU's acceptable and good ratios are much lower than our model's, indicating that it may not maintain enough response relevance when encouraging diversity. This is consistent with the results of the automatic evaluation in Table TABREF20. We also notice that the CVAE achieves the worst human annotation score. This validates that the original CVAE does not work well for open-domain response generation, and that our proposed DCVAE is an effective way to improve the CVAE for better output diversity.
Experimental Results and Analysis ::: Case Study
Figure FIGREF29 shows four example queries with their responses generated by all compared methods. The Seq2seq baseline tends to generate less informative responses. Though MMI-bidi can select different words to be used, its generated responses are still far from informative. MANM can avoid generating generic responses in most cases, but sometimes its generated response is irrelevant to the query, as shown in the left bottom case. Moreover, the latent responding mechanisms in MANM have no explicit or interpretable meaning. Similar results can be observed from HGFU. If the PMI selects irrelevant cue words, the resulting response may not be relevant. Meanwhile, responses generated by our DCVAE are more informative as well as relevant to input queries.
Experimental Results and Analysis ::: Different Sizes of the Latent Space
We vary the size of the latent space (i.e., the sampled word space $Z$) used in our proposed DCVAE. Figure FIGREF32 shows the automatic and human evaluation results when the latent space is set to the top 10k words, the top 20k words, and all words in the vocabulary. On the automatic evaluation results, as the sampled latent space becomes larger, the BLEU-4 score increases but the distinct ratios drop. We find that although the DCVAE with a small latent space has higher distinct-1/2 ratios, many of its generated sentences are grammatically incorrect, which also explains its lower BLEU-4 score. On the human evaluation results, all metrics improve with the use of a larger latent space. This is consistent with our motivation that open-domain short-text conversation covers a wide range of topics and areas, and the top frequent words are not enough to capture the content of most training pairs. Thus a small latent space, i.e., the top frequent words only, is not sufficient to model enough latent information, and a large latent space is generally favored in our proposed model.
Experimental Results and Analysis ::: Analysis on the Two-Stage Sampling
We further look into whether the two-stage sampling method is effective in the proposed DCVAE. Here, the One-Stage method corresponds to the basic formulation in Section SECREF4 with no use of the clustering information in the prior or posterior network. Results on both automatic and human evaluation metrics are shown in Figures FIGREF37 and FIGREF38. We can observe that the performance of the DCVAE without the two-stage sampling method drops drastically. This means that the proposed two-stage sampling method is important for the DCVAE to work well.
Besides, to validate the effectiveness of clustering, we implemented a modified DCVAE (DCVAE-CD) that uses a pure categorical distribution in which each variable has no exact meaning. That is, the embedding of each latent variable does not correspond to any word embedding. Automatic evaluation results of this modified model are shown in Figure FIGREF39. We can see that DCVAE-CD performs worse, which means that defining the latent distribution over the word vocabulary is important in our model.
Conclusion
In this paper, we have presented a novel response generation model for short-text conversation via a discrete CVAE. We replace the continuous latent variable in the standard CVAE by an interpretable discrete variable, which is set to a word in the vocabulary. The sampled latent word has an explicit semantic meaning, acting as a guide to the generation of informative and diverse responses. We also propose to use a two-stage sampling approach to enable efficient selection of diverse variables from a large latent space, which is very essential for our model. Experimental results show that our model outperforms various kinds of generation models under both automatic and human evaluations.
Acknowledgements
This work was supported by National Natural Science Foundation of China (Grant No. 61751206, 61876120). | (1) Good (3 points): The response is grammatical, semantically relevant to the query, and more importantly informative and interesting, (2) Acceptable (2 points): The response is grammatical, semantically relevant to the query, but too trivial or generic, (3) Failed (1 point): The response has grammar mistakes or irrelevant to the query |
b48cd91219429f910b1ea6fcd6f4bd143ddf096f | b48cd91219429f910b1ea6fcd6f4bd143ddf096f_0 | Q: What automatic metrics are used?
Text: Introduction
Open-domain response generation BIBREF0, BIBREF1 for single-round short text conversation BIBREF2, aims at generating a meaningful and interesting response given a query from human users. Neural generation models are of growing interest in this topic due to their potential to leverage massive conversational datasets on the web. These generation models such as encoder-decoder models BIBREF3, BIBREF2, BIBREF4, directly build a mapping from the input query to its output response, which treats all query-response pairs uniformly and optimizes the maximum likelihood estimation (MLE). However, when the models converge, they tend to output bland and generic responses BIBREF5, BIBREF6, BIBREF7.
Many enhanced encoder-decoder approaches have been proposed to improve the quality of generated responses. They can be broadly classified into two categories (see Section SECREF2 for details): (1) The first category does not change the encoder-decoder framework itself. These approaches only change the decoding strategy, such as encouraging diverse tokens to be selected in beam search BIBREF5, BIBREF8, or add more components on top of the encoder-decoder framework, such as the Generative Adversarial Network (GAN)-based methods BIBREF9, BIBREF10, BIBREF11, which add discriminators to perform adversarial training. (2) The second category modifies the encoder-decoder framework directly by incorporating useful information as latent variables in order to generate more specific responses BIBREF12, BIBREF13. However, all these enhanced methods still optimize the MLE of the log-likelihood or the complete log-likelihood conditioned on their assumed latent information, and models estimated by the MLE naturally tend to output frequent patterns in the training data.
Instead of optimizing the MLE, some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional data log-likelihood on a continuous latent variable BIBREF14, BIBREF15. Open-domain response generation is a one-to-many problem, in which a query can be associated with many valid responses. The CVAE-based models generally assume the latent variable follows a multivariate Gaussian distribution with a diagonal covariance matrix, which can capture the latent distribution over all valid responses. With different sampled latent variables, the model is expected to decode diverse responses. Due to the advantage of the CVAE in modeling the response generation process, we focus on improving the performance of the CVAE-based response generation models.
Although the CVAE has achieved impressive results on many generation problems BIBREF16, BIBREF17, recent results on response generation show that CVAE-based generation models still suffer from the low output diversity problem; that is, multiple sampled latent variables result in responses with similar semantic meanings. To address this problem, extra guiding signals are often used to improve the basic CVAE. BIBREF14 zhao2017learning use dialogue acts to capture the discourse variations in multi-round dialogues as guiding knowledge. However, such discourse information can hardly be extracted for short-text conversation.
In our work, we propose a discrete CVAE (DCVAE), which utilizes a discrete latent variable with an explicit semantic meaning in the CVAE for short-text conversation. Our model mitigates the low output diversity problem in the CVAE by exploiting the semantic distance between the latent variables to maintain good diversity between the sampled latent variables. Accordingly, we propose a two-stage sampling approach to enable efficient selection of diverse variables from a large latent space assumed in the short-text conversation task.
To summarize, this work makes three contributions: (1) We propose a response generation model for short-text conversation based on a DCVAE, which utilizes a discrete latent variable with an explicit semantic meaning and could generate high-quality responses. (2) A two-stage sampling approach is devised to enable efficient selection of diverse variables from a large latent space assumed in the short-text conversation task. (3) Experimental results show that the proposed DCVAE with the two-stage sampling approach outperforms various kinds of generation models under both automatic and human evaluations, and generates more high-quality responses. All our code and datasets are available at https://ai.tencent.com/ailab/nlp/dialogue.
Related Work
In this section, we briefly review recent advancement in encoder-decoder models and CVAE-based models for response generation.
Related Work ::: Encoder-decoder models
Encoder-decoder models for short-text conversation BIBREF3, BIBREF2 maximize the likelihood of responses given queries. During testing, a decoder sequentially generates a response using search strategies such as beam search. However, these models frequently generate bland and generic responses.
Some early work improves the quality of generated responses by modifying the decoding strategy. For example, BIBREF5 li2016diversity propose to use the maximum mutual information (MMI) to penalize general responses in beam search during testing. Some later studies alter the data distributions according to different sample weighting schemes, encouraging the model to put more emphasis on learning samples with rare words BIBREF18, BIBREF19. As can be seen, these methods focus on either pre-processing the dataset before training or post-processing the results in testing, with no change to encoder-decoder models themselves.
Other work uses encoder-decoder models as the basis and adds more components to refine the response generation process. BIBREF9 xu2017neural present a GAN-based model with an approximate embedding layer. zhang2018generating employ an adversarial learning method to directly optimize the lower bound of the MMI objective BIBREF5 in model training. These models employ the encoder-decoder models as the generator and focus on how to design the discriminator and optimize the generator and discriminator jointly. Deep reinforcement learning is also applied to model future reward in chatbots after an encoder-decoder model converges BIBREF6, BIBREF11. The above methods directly integrate the encoder-decoder models as one of their modules and still do not actually modify the encoder-decoder models themselves.
Much attention has turned to incorporating useful information as latent variables in the encoder-decoder framework to improve the quality of generated responses. BIBREF12 yao2017towards consider that a response is generated by a query and a pre-computed cue word jointly. BIBREF13 zhou2017mechanism utilize a set of latent embeddings to model diverse responding mechanisms. BIBREF20 xing2017topic introduce pre-defined topics from an external corpus to augment the information used in response generation. BIBREF21 gao2019generating propose a model that infers latent words to generate multiple responses. These studies indicate that many factors in conversation are useful to model the variation of a generated response, but it is nontrivial to extract all of them. Also, these methods still optimize the MLE of the complete log-likelihood conditioned on their assumed latent information, and a model optimized with the MLE naturally favors frequent patterns in the training data. Note that we apply a latent space assumption similar to that used in BIBREF12, BIBREF21, i.e., the latent variables are words from the vocabulary. However, they use a latent word in a factorized encoder-decoder model, whereas our model uses it to construct a discrete CVAE, and our optimization algorithm is entirely different from theirs.
Related Work ::: The CVAE-based models
A few works indicate that it is worth applying the CVAE, which was originally used in image generation BIBREF16, BIBREF17 and is optimized with the variational lower bound of the conditional log-likelihood, to dialogue generation. For task-oriented dialogues, BIBREF22 wen2017latent use the latent variable to model intentions in the framework of neural variational inference. For chit-chat multi-round conversations, BIBREF23 serban2017hierarchical model the generative process with multiple levels of variability based on a hierarchical sequence-to-sequence model with a continuous high-dimensional latent variable. BIBREF14 zhao2017learning make use of the CVAE, with the latent variable capturing discourse-level variations. BIBREF24 gu2018dialogwae propose to induce the latent variables by transforming context-dependent Gaussian noise. BIBREF15 shen2017conditional present a conditional variational framework for generating specific responses based on specific attributes. Yet, it is observed in other tasks such as image captioning BIBREF25 and question generation BIBREF26 that CVAE-based generation models suffer from the low output diversity problem, i.e., multiple sampled variables point to the same generated sequences. In this work, we utilize a discrete latent variable with an interpretable meaning to alleviate this low output diversity problem in short-text conversation.
We find that BIBREF27 zhao2018unsupervised make use of a set of discrete variables that define high-level attributes of a response. Although they interpret the meanings of the learned discrete latent variables by clustering data according to certain classes (e.g., dialog acts), such latent variables still have no exact meanings. In our model, we connect each latent variable with a word in the vocabulary, so each latent variable has an exact semantic meaning. Besides, they focus on multi-turn dialogue generation and present an unsupervised discrete sentence representation learning method that learns from the context, while our focus is primarily on single-turn dialogue generation with no context information.
Proposed Models ::: DCVAE and Basic Network Modules
Following previous CVAE-based generation models BIBREF14, we introduce a latent variable $z$ for each input sequence and our goal is to maximize the lower bound on the conditional data log-likelihood $p(\mathbf {y}|\mathbf {x})$, where $\mathbf {x}$ is the input query sequence and $\mathbf {y}$ is the target response sequence:
Here, $p(z|\mathbf {x})$/$q(z|\mathbf {y},\mathbf {x})$/$p(\mathbf {y}|\mathbf {x},z)$ is parameterized by the prior/posterior/generation network respectively. $D_{KL}(q(z|\mathbf {y},\mathbf {x})||p(z|\mathbf {x}))$ is the Kullback-Leibler (KL) divergence between the posterior and prior distribution. Generally, $z$ is set to follow a Gaussian distribution in both the prior and posterior networks. As mentioned in the related work, directly using the above CVAE formulation causes the low output diversity problem. This observation is also validated in the short-text conversation task in our experiments.
Now, we introduce our basic discrete CVAE formulation to alleviate the low output diversity problem. We change the continuous latent variable $z$ to a discrete latent one with an explicit interpretable meaning, which could actively control the generation of the response. An intuitive way is to connect each latent variable with a word in the vocabulary. With a sampled latent $z$ from the prior (in testing)/posterior network (in training), the generation network will take the query representation together with the word embedding of this latent variable as the input to decode the response. Here, we assume that a single word is enough to drive the generation network to output diverse responses for short text conversation, in which the response is generally short and compact.
A major advantage of our DCVAE is that for words with far different meanings, their word embeddings (especially since we use a good pre-trained word embedding corpus) generally have a large distance and drive the generation network to decode scattered responses, thus improving output diversity. In the standard CVAE, the $z$'s assumed in a continuous space may not maintain the semantic distance as in the embedding space, and diverse $z$'s may point to the same semantic meaning, in which case the generation network is hard to train well with such confusing information. Moreover, we can make use of the semantic distance between latent variables to perform better sampling to approximate the objective during optimization, which will be introduced in Section SECREF10.
The latent variable $z$ is thus set to follow a categorical distribution with each dimension corresponding to a word in the vocabulary. Therefore the prior and posterior networks should output categorical probability distributions:
where $\theta $ and $\phi $ are parameters of the two networks respectively. The KL distance of these two distributions can be calculated in a closed form solution:
where $Z$ contains all words in the vocabulary.
In the following, we present the details of the prior, posterior and generation network.
Prior network $p(z|\mathbf {x})$: It aims at inferring the latent variable $z$ given the input sequence $x$. We first obtain an input representation $\mathbf {h}_{\mathbf {x}}^{p}$ by encoding the input query $\mathbf {x}$ with a bi-directional GRU and then compute $g_{\theta }(\mathbf {x})$ in Eq. as follows:
where $\theta $ contains parameters in both the bidirectional GRU and Eq. DISPLAY_FORM8.
Posterior network $q(z|\mathbf {y}, \mathbf {x})$: It infers a latent variable $z$ given an input query $\mathbf {x}$ and its target response $\mathbf {y}$. We construct the representations for the input and the target sequence with separate bi-directional GRUs, then add them up to compute $f_{\phi }(\mathbf {y}, \mathbf {x})$ in Eq. to predict the probability of $z$:
where $\phi $ contains parameters in the two encoding functions and Eq. DISPLAY_FORM9. Note that the parameters of the encoding functions are not shared in the prior and posterior network.
Generation network $p(\mathbf {y}|\mathbf {x},z)$: We adopt an encoder-decoder model with attention BIBREF28 used in the decoder. With a sampled latent variable $z$, a typical strategy is to combine its representation, which in this case is the word embedding $\mathbf {e}_z$ of $z$, only in the beginning of decoding. However, many previous works observe that the influence of the added information will vanish over time BIBREF12, BIBREF21. Thus, after obtaining an attentional hidden state at each decoding step, we concatenate the representation $\mathbf {h}_z$ of the latent variable and the current hidden state to produce a final output in our generation network.
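A hedged sketch of the output-side fusion described above, concatenating the attentional hidden state with the latent representation before the projection to the vocabulary, is given below; the module name and dimensions are illustrative assumptions rather than details from the released code.

```python
import torch
import torch.nn as nn

class FusedOutputLayer(nn.Module):
    """Concatenates the attentional hidden state with the latent-word representation
    before projecting to the vocabulary, so the latent signal is used at every step."""
    def __init__(self, hidden_dim, latent_dim, vocab_size):
        super().__init__()
        self.proj = nn.Linear(hidden_dim + latent_dim, vocab_size)

    def forward(self, attn_hidden, h_z):
        # attn_hidden: (batch, hidden_dim) at one decoding step; h_z: (batch, latent_dim)
        return self.proj(torch.cat([attn_hidden, h_z], dim=-1))  # logits over the vocabulary
```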
Proposed Models ::: A Two-Stage Sampling Approach
When the CVAE models are optimized, they tend to converge to a solution with a vanishingly small KL term, thus failing to encode meaningful information in $z$. To address this problem, we follow the idea in BIBREF14, which introduces an auxiliary loss that requires the decoder in the generation network to predict the bag-of-words in the response $\mathbf {y}$. Specifically, the response $\mathbf {y}$ is now represented by two sequences simultaneously: $\mathbf {y}_o$ with word order and $\mathbf {y}_{bow}$ without order. These two sequences are assumed to be conditionally independent given $z$ and $\mathbf {x}$. Then our training objective can be rewritten as:
where $p(\mathbf {y}_{bow}|\mathbf {x}, z)$ is obtained by a multilayer perceptron $\mathbf {h}^{b} = \mbox{MLP}(\mathbf {x}, z)$:
where $|\mathbf {y}|$ is the length of $\mathbf {y}$, $y_t$ is the word index of $t$-th word in $\mathbf {y}$, and $V$ is the vocabulary size.
During training, we generally approximate $\mathbb {E}_{z \sim q(z|\mathbf {y},\mathbf {x})}[\log p(\mathbf {y}|\mathbf {x},z)]$ by sampling $z$ $N$ times from the distribution $q(z|\mathbf {y}, \mathbf {x})$. In our model, the latent space is discrete but generally large, since we set it to be the vocabulary of the dataset. The vocabulary contains words that are similar syntactically or semantically. Directly sampling $z$ from the categorical distribution in Eq. cannot make use of such word similarity information.
Hence, we propose to modify our model in Section SECREF4 to consider the word similarity for sampling multiple accurate and diverse latent $z$'s. We first cluster $z \in Z$ into $K$ clusters $c_1,\ldots , c_K$. Each $z$ belongs to only one of the $K$ clusters and dissimilar words lie in distinctive groups. We use the K-means clustering algorithm to group $z$'s using a pre-trained embedding corpus BIBREF29. Then we revise the posterior network to perform a two-stage cluster sampling by decomposing $q(z|\mathbf {y}, \mathbf {x})$ as :
That is, we first compute $q(c_{k_z}|\mathbf {y}, \mathbf {x})$, which is the probability of the cluster that $z$ belongs to conditioned on both $\mathbf {x}$ and $\mathbf {y}$. Next, we compute $q(z|\mathbf {x}, \mathbf {y}, c_{k_z})$, which is the probability distribution of $z$ conditioned on the $\mathbf {x}$, $\mathbf {y}$ and the cluster $c_{k_z}$. When we perform sampling from $q(z|\mathbf {x}, \mathbf {y})$, we can exploit the following two-stage sampling approach: first sample the cluster based on $q( c_{k} |\mathbf {x}, \mathbf {y})$; next sample a specific $z$ from $z$'s within the sampled cluster based on $q(z|\mathbf {x}, \mathbf {y}, c_{k_z})$.
Similarly, we can decompose the prior distribution $p(z| \mathbf {x})$ accordingly for consistency:
In testing, we can perform the two-stage sampling according to $p(c_{k}|\mathbf {x})$ and $p(z|\mathbf {x}, c_{k_z})$. Our full model is illustrated in Figure FIGREF3.
Network structure modification: To modify the network structure for the two-stage sampling method, we first compute the probability of each cluster given $\mathbf {x}$ in the prior network (or $\mathbf {x}$ and $\mathbf {y}$ in the posterior network) with a softmax layer (Eq. DISPLAY_FORM8 or Eq. DISPLAY_FORM9 followed by a softmax function). We then add the input representation and the cluster embedding $\mathbf {e}_{c_z}$ of a sampled cluster $c_{z}$, and use another softmax layer to compute the probability of each $z$ within the sampled cluster. In the generation network, the representation of $z$ is the sum of the cluster embedding $\mathbf {e}_{c_z}$ and its word embedding $\mathbf {e}_{z}$.
Network pre-training: To speed up the convergence of our model, we pre-extract keywords from each query using the TF-IDF method. Then we use these keywords to pre-train the prior and posterior networks. The generation network is not pre-trained because in practice it converges fast in only a few epochs.
Experimental Settings
Next, we describe our experimental settings including the dataset, implementation details, all compared methods, and the evaluation metrics.
Experimental Settings ::: Dataset
We conduct our experiments on a short-text conversation benchmark dataset BIBREF2 which contains about 4 million post-response pairs from Sina Weibo, a Chinese social platform. We employ the Jieba Chinese word segmenter to tokenize the queries and responses into sequences of Chinese words. We use a vocabulary of 50,000 words (a mixture of Chinese words and characters), which covers 99.98% of the words in the dataset. All other words are replaced with $<$UNK$>$.
Experimental Settings ::: Implementation Details
We use a single-layer bi-directional GRU for the encoder in the prior/posterior/generation network, and a one-layer GRU for the decoder in the generation network. The dimension of all hidden vectors is 1024. The cluster embedding dimension is 620. Except for the word embeddings, which are initialized from the word embedding corpus BIBREF29, all other parameters are initialized by sampling from a uniform distribution $[-0.1,0.1]$. The batch size is 128. We use the Adam optimizer with a learning rate of 0.0001. For the number of clusters $K$ in our method, we evaluate four different values $(5, 10, 100, 1000)$ using automatic metrics and set $K$ to 10, which performs best among the four options empirically. It takes about one day for every two epochs of our model on a Tesla P40 GPU, and we train ten epochs in total. During testing, we use beam search with a beam size of 10.
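For reference, the stated hyperparameters can be collected into a single configuration sketch; this is purely illustrative and not code released by the authors.

```python
from dataclasses import dataclass

@dataclass
class DCVAEConfig:
    vocab_size: int = 50_000
    hidden_dim: int = 1024        # dimension of all hidden vectors
    cluster_emb_dim: int = 620
    num_clusters: int = 10        # K, chosen from {5, 10, 100, 1000}
    batch_size: int = 128
    learning_rate: float = 1e-4   # Adam
    beam_size: int = 10
    num_epochs: int = 10
    init_range: float = 0.1       # uniform initialization in [-0.1, 0.1]
```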
Experimental Settings ::: Compared Methods
In our work, we focus on comparing various methods that model $p(\mathbf {y}|\mathbf {x})$ differently. We compare our proposed discrete CVAE (DCVAE) with the two-stage sampling approach to three categories of response generation models:
Baselines: Seq2seq, the basic encoder-decoder model with soft attention mechanism BIBREF30 used in decoding and beam search used in testing; MMI-bidi BIBREF5, which uses the MMI to re-rank results from beam search.
CVAE BIBREF14: We adapt the original model, which was designed for multi-round conversation, to our single-round setting. For a fair comparison, we utilize the same keywords used in our network pre-training as the knowledge-guided features in this model.
Other enhanced encoder-decoder models: Hierarchical Gated Fusion Unit (HGFU) BIBREF12, which incorporates a cue word extracted using pointwise mutual information (PMI) into the decoder to generate meaningful responses; Mechanism-Aware Neural Machine (MANM) BIBREF13, which introduces latent embeddings to allow for multiple diverse response generation.
Here, we do not compare with RL/GAN-based methods, because all the compared methods could replace their objectives with reward functions (as in the RL-based methods) or add a discriminator (as in the GAN-based methods) to further improve the overall performance. However, such extensions are not the contribution of this work; we leave discussing the usefulness of our model, as well as of other enhanced generation models, in combination with RL/GAN-based methods to future work.
Experimental Settings ::: Evaluation
To evaluate the responses generated by all compared methods, we compute the following automatic metrics on our test set:
BLEU: BLEU-n measures the average n-gram precision against a set of reference responses. We report BLEU-n with n=1,2,3,4.
Distinct-1 & distinct-2 BIBREF5: We count the number of distinct uni-grams and bi-grams in the generated responses and divide it by the total number of generated uni-grams and bi-grams in the test set. These ratios serve as automatic measures of response diversity; a computation sketch of these metrics is given below.
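For reference, these automatic metrics can be computed roughly as in the following sketch, which assumes NLTK's corpus-level BLEU with uniform n-gram weights and simple smoothing, together with a straightforward distinct-n helper; the exact evaluation script may tokenize and smooth differently.

# Sketch of the automatic metrics (illustrative; tokenization/smoothing are assumptions).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def distinct_n(responses, n):
    # responses: list of token lists. Returns (#distinct n-grams) / (#n-grams).
    ngrams, total = set(), 0
    for tokens in responses:
        for i in range(len(tokens) - n + 1):
            ngrams.add(tuple(tokens[i:i + n]))
            total += 1
    return len(ngrams) / total if total else 0.0

def bleu_n(references, hypotheses, n):
    # references: list of lists of reference token lists; hypotheses: list of token lists.
    weights = tuple(1.0 / n for _ in range(n))
    return corpus_bleu(references, hypotheses, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

# Example:
# hyps = [["我", "也", "喜欢", "跑步"]]
# refs = [[["我", "也", "是"], ["我", "喜欢", "跑步"]]]
# distinct_n(hyps, 1), distinct_n(hyps, 2), bleu_n(refs, hyps, 2)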
Three annotators from a commercial annotation company are recruited to conduct our human evaluation. Responses from different models are shuffled before labeling. 300 test queries are randomly selected, and the annotators are asked to independently score the results of these queries in terms of their quality: (1) Good (3 points): the response is grammatical, semantically relevant to the query, and, more importantly, informative and interesting; (2) Acceptable (2 points): the response is grammatical and semantically relevant to the query, but too trivial or generic (e.g., “我不知道 (I don't know)”, “我也是 (Me too)”, “我喜欢 (I like it)”, etc.); (3) Failed (1 point): the response has grammar mistakes or is irrelevant to the query.
Experimental Results and Analysis
In the following, we present the results of all compared methods and conduct a case study on these results. Then, we perform further analysis of our proposed method by varying the settings of the components designed in our model.
Experimental Results and Analysis ::: Results on All Compared Methods
Results on automatic metrics are shown on the left-hand side of Table TABREF20. From the results we can see that our proposed DCVAE achieves the best BLEU scores and the second-best distinct ratios. The HGFU has the best dist-2 ratio, but its BLEU scores are the worst. These results indicate that the responses generated by the HGFU are less close to the ground-truth references. Although the automatic evaluation generally indicates the quality of generated responses, it cannot evaluate a generated response precisely, and the automatic metrics may not be consistent with human perceptions BIBREF31. Thus, we consider the human evaluation results more reliable.
For the human evaluation results on the right-hand side of Table TABREF20, we show the mean and standard deviation of all test results as well as the percentage of acceptable responses (2 or 3 points) and good responses (3 points only). Our proposed DCVAE has the best quality score among all compared methods. Moreover, DCVAE achieves a much higher good ratio, which means it generates more informative and interesting responses. Besides, the HGFU's acceptable and good ratios are much lower than our model's, indicating that it may not maintain enough response relevance when encouraging diversity. This is consistent with the automatic evaluation results in Table TABREF20. We also notice that the CVAE achieves the worst human annotation score. This validates that the original CVAE does not work well for open-domain response generation and that our proposed DCVAE is an effective way to improve the CVAE for better output diversity.
Experimental Results and Analysis ::: Case Study
Figure FIGREF29 shows four example queries with their responses generated by all compared methods. The Seq2seq baseline tends to generate less informative responses. Though MMI-bidi can select different words to be used, its generated responses are still far from informative. MANM can avoid generating generic responses in most cases, but sometimes its generated response is irrelevant to the query, as shown in the bottom-left case. Moreover, the latent responding mechanisms in MANM have no explicit or interpretable meaning. Similar results can be observed for HGFU: if the PMI selects irrelevant cue words, the resulting response may not be relevant. Meanwhile, responses generated by our DCVAE are more informative as well as relevant to the input queries.
Experimental Results and Analysis ::: Different Sizes of the Latent Space
We vary the size of the latent space (i.e., the sampled word space $Z$) used in our proposed DCVAE. Figure FIGREF32 shows the automatic and human evaluation results with the latent space set to the top 10k words, the top 20k words, and all words in the vocabulary. On the automatic metrics, as the sampled latent space grows, the BLEU-4 score increases but the distinct ratios drop. We find that although the DCVAE with a small latent space has higher distinct-1/2 ratios, many of its generated sentences are grammatically incorrect, which also explains its lower BLEU-4 score. On the human evaluation, all metrics improve with a larger latent space. This is consistent with our motivation that open-domain short-text conversation covers a wide range of topics and areas, and the top frequent words are not enough to capture the content of most training pairs. Thus a small latent space, i.e., the top frequent words only, cannot model enough latent information, and a large latent space is generally favored in our proposed model.
Experimental Results and Analysis ::: Analysis on the Two-Stage Sampling
We further examine whether the two-stage sampling method is effective in the proposed DCVAE. Here, the One-Stage method corresponds to the basic formulation in Section SECREF4 with no use of the clustering information in the prior or posterior network. Results on both automatic and human evaluation metrics are shown in Figures FIGREF37 and FIGREF38. We observe that the performance of the DCVAE without the two-stage sampling method drops drastically, which means that the proposed two-stage sampling method is important for the DCVAE to work well.
Besides, to validate the effectiveness of clustering, we implement a modified DCVAE (DCVAE-CD) that uses a pure categorical distribution in which each variable has no exact meaning, i.e., the embedding of each latent variable does not correspond to any word embedding. Automatic evaluation results of this modified model are shown in Figure FIGREF39. We can see that DCVAE-CD performs worse, which means that defining the latent distribution over the word vocabulary is important in our model.
Conclusion
In this paper, we have presented a novel response generation model for short-text conversation based on a discrete CVAE. We replace the continuous latent variable in the standard CVAE with an interpretable discrete variable, which is set to a word in the vocabulary. The sampled latent word has an explicit semantic meaning, acting as a guide to the generation of informative and diverse responses. We also propose a two-stage sampling approach to enable efficient selection of diverse variables from a large latent space, which is essential for our model. Experimental results show that our model outperforms various kinds of generation models under both automatic and human evaluations.
Acknowledgements
This work was supported by National Natural Science Foundation of China (Grant No. 61751206, 61876120).
4bdad5a20750c878d1a891ef255621f6172b6a79 | 4bdad5a20750c878d1a891ef255621f6172b6a79_0 | Q: How does discrete latent variable has an explicit semantic meaning to improve the CVAE on short-text conversation?
Text: Introduction
Open-domain response generation BIBREF0, BIBREF1 for single-round short text conversation BIBREF2, aims at generating a meaningful and interesting response given a query from human users. Neural generation models are of growing interest in this topic due to their potential to leverage massive conversational datasets on the web. These generation models such as encoder-decoder models BIBREF3, BIBREF2, BIBREF4, directly build a mapping from the input query to its output response, which treats all query-response pairs uniformly and optimizes the maximum likelihood estimation (MLE). However, when the models converge, they tend to output bland and generic responses BIBREF5, BIBREF6, BIBREF7.
Many enhanced encoder-decoder approaches have been proposed to improve the quality of generated responses. They can be broadly classified into two categories (see Section SECREF2 for details): (1) One that does not change the encoder-decoder framework itself. These approaches only change the decoding strategy, such as encouraging diverse tokens to be selected in beam search BIBREF5, BIBREF8; or adding more components based on the encoder-decoder framework, such as the Generative Adversarial Network (GAN)-based methods BIBREF9, BIBREF10, BIBREF11 which add discriminators to perform adversarial training; (2) The second category modifies the encoder-decoder framework directly by incorporating useful information as latent variables in order to generate more specific responses BIBREF12, BIBREF13. However, all these enhanced methods still optimize the MLE of the log-likelihood or the complete log-likelihood conditioned on their assumed latent information, and models estimated by the MLE naturally favor to output frequent patterns in training data.
Instead of optimizing the MLE, some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional data log-likelihood on a continuous latent variable BIBREF14, BIBREF15. Open-domain response generation is a one-to-many problem, in which a query can be associated with many valid responses. The CVAE-based models generally assume the latent variable follows a multivariate Gaussian distribution with a diagonal covariance matrix, which can capture the latent distribution over all valid responses. With different sampled latent variables, the model is expected to decode diverse responses. Due to the advantage of the CVAE in modeling the response generation process, we focus on improving the performance of the CVAE-based response generation models.
Although the CVAE has achieved impressive results on many generation problems BIBREF16, BIBREF17, recent results on response generation show that the CVAE-based generation models still suffer from the low output diversity problem. That is multiple sampled latent variables result in responses with similar semantic meanings. To address this problem, extra guided signals are often used to improve the basic CVAE. BIBREF14 zhao2017learning use dialogue acts to capture the discourse variations in multi-round dialogues as guided knowledge. However, such discourse information can hardly be extracted for short-text conversation.
In our work, we propose a discrete CVAE (DCVAE), which utilizes a discrete latent variable with an explicit semantic meaning in the CVAE for short-text conversation. Our model mitigates the low output diversity problem in the CVAE by exploiting the semantic distance between the latent variables to maintain good diversity between the sampled latent variables. Accordingly, we propose a two-stage sampling approach to enable efficient selection of diverse variables from a large latent space assumed in the short-text conversation task.
To summarize, this work makes three contributions: (1) We propose a response generation model for short-text conversation based on a DCVAE, which utilizes a discrete latent variable with an explicit semantic meaning and could generate high-quality responses. (2) A two-stage sampling approach is devised to enable efficient selection of diverse variables from a large latent space assumed in the short-text conversation task. (3) Experimental results show that the proposed DCVAE with the two-stage sampling approach outperforms various kinds of generation models under both automatic and human evaluations, and generates more high-quality responses. All our code and datasets are available at https://ai.tencent.com/ailab/nlp/dialogue.
Related Work
In this section, we briefly review recent advancement in encoder-decoder models and CVAE-based models for response generation.
Related Work ::: Encoder-decoder models
Encoder-decoder models for short-text conversation BIBREF3, BIBREF2 maximize the likelihood of responses given queries. During testing, a decoder sequentially generates a response using search strategies such as beam search. However, these models frequently generate bland and generic responses.
Some early work improves the quality of generated responses by modifying the decoding strategy. For example, BIBREF5 li2016diversity propose to use the maximum mutual information (MMI) to penalize general responses in beam search during testing. Some later studies alter the data distributions according to different sample weighting schemes, encouraging the model to put more emphasis on learning samples with rare words BIBREF18, BIBREF19. As can be seen, these methods focus on either pre-processing the dataset before training or post-processing the results in testing, with no change to encoder-decoder models themselves.
Some other work use encoder-decoder models as the basis and add more components to refine the response generation process. BIBREF9 xu2017neural present a GAN-based model with an approximate embedding layer. zhang2018generating employ an adversarial learning method to directly optimize the lower bounder of the MMI objective BIBREF5 in model training. These models employ the encoder-decoder models as the generator and focus on how to design the discriminator and optimize the generator and discriminator jointly. Deep reinforcement learning is also applied to model future reward in chatbot after an encoder-decoder model converges BIBREF6, BIBREF11. The above methods directly integrate the encoder-decoder models as one of their model modules and still do not actually modify the encoder-decoder models.
Many attentions have turned to incorporate useful information as latent variables in the encoder-decoder framework to improve the quality of generated responses. BIBREF12 yao2017towards consider that a response is generated by a query and a pre-computed cue word jointly. BIBREF13 zhou2017mechanism utilize a set of latent embeddings to model diverse responding mechanisms. BIBREF20 xing2017topic introduce pre-defined topics from an external corpus to augment the information used in response generation. BIBREF21 gao2019generating propose a model that infers latent words to generate multiple responses. These studies indicate that many factors in conversation are useful to model the variation of a generated response, but it is nontrivial to extract all of them. Also, these methods still optimize the MLE of the complete log-likelihood conditioned on their assumed latent information, and the model optimized with the MLE naturally favors to output frequent patterns in the training data. Note that we apply a similar latent space assumption as used in BIBREF12, BIBREF21, i.e. the latent variables are words from the vocabulary. However, they use a latent word in a factorized encoder-decoder model, but our model uses it to construct a discrete CVAE and our optimization algorithm is entirely different from theirs.
Related Work ::: The CVAE-based models
A few works indicate that it is worth trying to apply the CVAE to dialogue generation which is originally used in image generation BIBREF16, BIBREF17 and optimized with the variational lower bound of the conditional log-likelihood. For task-oriented dialogues, BIBREF22 wen2017latent use the latent variable to model intentions in the framework of neural variational inference. For chit-chat multi-round conversations, BIBREF23 serban2017hierarchical model the generative process with multiple levels of variability based on a hierarchical sequence-to-sequence model with a continuous high-dimensional latent variable. BIBREF14 zhao2017learning make use of the CVAE and the latent variable is used to capture discourse-level variations. BIBREF24 gu2018dialogwae propose to induce the latent variables by transforming context-dependent Gaussian noise. BIBREF15 shen2017conditional present a conditional variational framework for generating specific responses based on specific attributes. Yet, it is observed in other tasks such as image captioning BIBREF25 and question generation BIBREF26 that the CVAE-based generation models suffer from the low output diversity problem, i.e. multiple sampled variables point to the same generated sequences. In this work, we utilize a discrete latent variable with an interpretable meaning to alleviate this low output diversity problem on short-text conversation.
We find that BIBREF27 zhao2018unsupervised make use of a set of discrete variables that define high-level attributes of a response. Although they interpret meanings of the learned discrete latent variables by clustering data according to certain classes (e.g. dialog acts), such latent variables still have no exact meanings. In our model, we connect each latent variable with a word in the vocabulary, thus each latent variable has an exact semantic meaning. Besides, they focus on multi-turn dialogue generation and presented an unsupervised discrete sentence representation learning method learned from the context while our concentration is primarily on single-turn dialogue generation with no context information.
Proposed Models ::: DCVAE and Basic Network Modules
Following previous CVAE-based generation models BIBREF14, we introduce a latent variable $z$ for each input sequence and our goal is to maximize the lower bound on the conditional data log-likelihood $p(\mathbf {y}|\mathbf {x})$, where $\mathbf {x}$ is the input query sequence and $\mathbf {y}$ is the target response sequence:
Here, $p(z|\mathbf {x})$/$q(z|\mathbf {y},\mathbf {x})$/$p(\mathbf {y}|\mathbf {x},z)$ is parameterized by the prior/posterior/generation network respectively. $D_{KL}(q(z|\mathbf {y},\mathbf {x})||p(z|\mathbf {x}))$ is the Kullback-Leibler (KL) divergence between the posterior and prior distribution. Generally, $z$ is set to follow a Gaussian distribution in both the prior and posterior networks. As mentioned in the related work, directly using the above CVAE formulation causes the low output diversity problem. This observation is also validated in the short-text conversation task in our experiments.
Now, we introduce our basic discrete CVAE formulation to alleviate the low output diversity problem. We change the continuous latent variable $z$ to a discrete latent one with an explicit interpretable meaning, which could actively control the generation of the response. An intuitive way is to connect each latent variable with a word in the vocabulary. With a sampled latent $z$ from the prior (in testing)/posterior network (in training), the generation network will take the query representation together with the word embedding of this latent variable as the input to decode the response. Here, we assume that a single word is enough to drive the generation network to output diverse responses for short text conversation, in which the response is generally short and compact.
A major advantage of our DCVAE is that for words with far different meanings, their word embeddings (especially that we use a good pre-trained word embedding corpus) generally have a large distance and drive the generation network to decode scattered responses, thus improve the output diversity. In the standard CVAE, $z$'s assumed in a continuous space may not maintain the semantic distance as in the embedding space and diverse $z$'s may point to the same semantic meaning, in which case the generation network is hard to train well with such confusing information. Moreover, we can make use of the semantic distance between latent variables to perform better sampling to approximate the objective during optimization, which will be introduced in Section SECREF10.
The latent variable $z$ is thus set to follow a categorical distribution with each dimension corresponding to a word in the vocabulary. Therefore the prior and posterior networks should output categorical probability distributions:
where $\theta $ and $\phi $ are parameters of the two networks respectively. The KL distance of these two distributions can be calculated in a closed form solution:
where $Z$ contains all words in the vocabulary.
In the following, we present the details of the prior, posterior and generation network.
Prior network $p(z|\mathbf {x})$: It aims at inferring the latent variable $z$ given the input sequence $x$. We first obtain an input representation $\mathbf {h}_{\mathbf {x}}^{p}$ by encoding the input query $\mathbf {x}$ with a bi-directional GRU and then compute $g_{\theta }(\mathbf {x})$ in Eq. as follows:
where $\theta $ contains parameters in both the bidirectional GRU and Eq. DISPLAY_FORM8.
Posterior network $q(z|\mathbf {y}, \mathbf {x})$: It infers a latent variable $z$ given a input query $\mathbf {x}$ and its target response $\mathbf {y}$. We construct both representations for the input and the target sequence by separated bi-directional GRU's, then add them up to compute $f_{\phi }(\mathbf {y}, \mathbf {x})$ in Eq. to predict the probability of $z$:
where $\phi $ contains parameters in the two encoding functions and Eq. DISPLAY_FORM9. Note that the parameters of the encoding functions are not shared in the prior and posterior network.
Generation network $p(\mathbf {y}|\mathbf {x},z)$: We adopt an encoder-decoder model with attention BIBREF28 used in the decoder. With a sampled latent variable $z$, a typical strategy is to combine its representation, which in this case is the word embedding $\mathbf {e}_z$ of $z$, only in the beginning of decoding. However, many previous works observe that the influence of the added information will vanish over time BIBREF12, BIBREF21. Thus, after obtaining an attentional hidden state at each decoding step, we concatenate the representation $\mathbf {h}_z$ of the latent variable and the current hidden state to produce a final output in our generation network.
Proposed Models ::: A Two-Stage Sampling Approach
When the CVAE models are optimized, they tend to converge to a solution with a vanishingly small KL term, thus failing to encode meaningful information in $z$. To address this problem, we follow the idea in BIBREF14, which introduces an auxiliary loss that requires the decoder in the generation network to predict the bag-of-words in the response $\mathbf {y}$. Specifically, the response $\mathbf {y}$ is now represented by two sequences simultaneously: $\mathbf {y}_o$ with word order and $\mathbf {y}_{bow}$ without order. These two sequences are assumed to be conditionally independent given $z$ and $\mathbf {x}$. Then our training objective can be rewritten as:
where $p(\mathbf {y}_{bow}|\mathbf {x}, z)$ is obtained by a multilayer perceptron $\mathbf {h}^{b} = \mbox{MLP}(\mathbf {x}, z)$:
where $|\mathbf {y}|$ is the length of $\mathbf {y}$, $y_t$ is the word index of $t$-th word in $\mathbf {y}$, and $V$ is the vocabulary size.
During training, we generally approximate $\mathbb {E}_{z \sim q(z|\mathbf {y},\mathbf {x})}[\log p(\mathbf {y}|\mathbf {x},z)]$ by sampling $N$ times of $z$ from the distribution $q(z|\mathbf {y}, \mathbf {x})$. In our model, the latent space is discrete but generally large since we set it as the vocabulary in the dataset . The vocabulary consists of words that are similar in syntactic or semantic. Directly sampling $z$ from the categorical distribution in Eq. cannot make use of such word similarity information.
Hence, we propose to modify our model in Section SECREF4 to consider the word similarity for sampling multiple accurate and diverse latent $z$'s. We first cluster $z \in Z$ into $K$ clusters $c_1,\ldots , c_K$. Each $z$ belongs to only one of the $K$ clusters and dissimilar words lie in distinctive groups. We use the K-means clustering algorithm to group $z$'s using a pre-trained embedding corpus BIBREF29. Then we revise the posterior network to perform a two-stage cluster sampling by decomposing $q(z|\mathbf {y}, \mathbf {x})$ as :
That is, we first compute $q(c_{k_z}|\mathbf {y}, \mathbf {x})$, which is the probability of the cluster that $z$ belongs to conditioned on both $\mathbf {x}$ and $\mathbf {y}$. Next, we compute $q(z|\mathbf {x}, \mathbf {y}, c_{k_z})$, which is the probability distribution of $z$ conditioned on the $\mathbf {x}$, $\mathbf {y}$ and the cluster $c_{k_z}$. When we perform sampling from $q(z|\mathbf {x}, \mathbf {y})$, we can exploit the following two-stage sampling approach: first sample the cluster based on $q( c_{k} |\mathbf {x}, \mathbf {y})$; next sample a specific $z$ from $z$'s within the sampled cluster based on $q(z|\mathbf {x}, \mathbf {y}, c_{k_z})$.
Similarly, we can decompose the prior distribution $p(z| \mathbf {x})$ accordingly for consistency:
In testing, we can perform the two-stage sampling according to $p(c_{k}|\mathbf {x})$ and $p(z|\mathbf {x}, c_{k_z})$. Our full model is illustrated in Figure FIGREF3.
Network structure modification: To modify the network structure for the two-stage sampling method, we first compute the probability of each cluster given $\mathbf {x}$ in the prior network (or $\mathbf {x}$ and $\mathbf {y}$ in the posterior network) with a softmax layer (Eq. DISPLAY_FORM8 or Eq. DISPLAY_FORM9 followed by a softmax function). We then add the input representation and the cluster embedding $\mathbf {e}_{c_z}$ of a sampled cluster $c_{z}$, and use another softmax layer to compute the probability of each $z$ within the sampled cluster. In the generation network, the representation of $z$ is the sum of the cluster embedding $\mathbf {e}_{c_z}$ and its word embedding $\mathbf {e}_{z}$.
Network pre-training: To speed up the convergence of our model, we pre-extract keywords from each query using the TF-IDF method. Then we use these keywords to pre-train the prior and posterior networks. The generation network is not pre-trained because in practice it converges fast in only a few epochs.
Experimental Settings
Next, we describe our experimental settings including the dataset, implementation details, all compared methods, and the evaluation metrics.
Experimental Settings ::: Dataset
We conduct our experiments on a short-text conversation benchmark dataset BIBREF2 which contains about 4 million post-response pairs from the Sina Weibo , a Chinese social platforms. We employ the Jieba Chinese word segmenter to tokenize the queries and responses into sequences of Chinese words. We use a vocabulary of 50,000 words (a mixture of Chinese words and characters), which covers 99.98% of words in the dataset. All other words are replaced with $<$UNK$>$.
Experimental Settings ::: Implementation Details
We use single-layer bi-directional GRU for the encoder in the prior/posterior/generation network, and one-layer GRU for the decoder in the generation network. The dimension of all hidden vectors is 1024. The cluster embedding dimension is 620. Except that the word embeddings are initialized by the word embedding corpus BIBREF29, all other parameters are initialized by sampling from a uniform distribution $[-0.1,0.1]$. The batch size is 128. We use Adam optimizer with a learning rate of 0.0001. For the number of clusters $K$ in our method, we evaluate four different values $(5, 10, 100, 1000)$ using automatic metrics and set $K$ to 10 which tops the four options empirically. It takes about one day for every two epochs of our model on a Tesla P40 GPU, and we train ten epochs in total. During testing, we use beam search with a beam size of 10.
Experimental Settings ::: Compared Methods
In our work, we focus on comparing various methods that model $p(\mathbf {y}|\mathbf {x})$ differently. We compare our proposed discrete CVAE (DCVAE) with the two-stage sampling approach to three categories of response generation models:
Baselines: Seq2seq, the basic encoder-decoder model with soft attention mechanism BIBREF30 used in decoding and beam search used in testing; MMI-bidi BIBREF5, which uses the MMI to re-rank results from beam search.
CVAE BIBREF14: We adapt the original model, which was designed for multi-round conversation, to our single-round setting. For a fair comparison, we utilize the same keywords used in our network pre-training as the knowledge-guided features in this model.
Other enhanced encoder-decoder models: Hierarchical Gated Fusion Unit (HGFU) BIBREF12, which incorporates a cue word extracted using pointwise mutual information (PMI) into the decoder to generate meaningful responses; Mechanism-Aware Neural Machine (MANM) BIBREF13, which introduces latent embeddings to allow for multiple diverse response generation.
Here, we do not compare against RL/GAN-based methods because all of the compared methods can replace their objectives with reward functions (as in RL-based methods) or add a discriminator (as in GAN-based methods) to further improve overall performance. Such extensions are orthogonal to the contribution of this work, and we leave the combination of our model (and the other enhanced generation models) with RL/GAN-based methods to future work.
Experimental Settings ::: Evaluation
To evaluate the responses generated by all compared methods, we compute the following automatic metrics on our test set:
BLEU: BLEU-n measures the average n-gram precision on a set of reference responses. We report BLEU-n with n=1,2,3,4.
Distinct-1 & distinct-2 BIBREF5: We count the numbers of distinct uni-grams and bi-grams in the generated responses and divide them by the total numbers of generated uni-grams and bi-grams in the test set. These ratios serve as automatic metrics for the diversity of the responses.
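Distinct-n as defined here can be computed over the whole test set with a few lines (a sketch; responses are assumed to be tokenized in the same way as the training data):

```python
def distinct_n(responses, n):
    """responses: list of token lists. Returns (#unique n-grams) / (#n-grams)."""
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# distinct-1 = distinct_n(generated, 1); distinct-2 = distinct_n(generated, 2)
```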
Three annotators from a commercial annotation company are recruited to conduct our human evaluation. Responses from different models are shuffled for labeling. 300 test queries are randomly selected, and annotators are asked to independently score the results of these queries in terms of their quality: (1) Good (3 points): the response is grammatical, semantically relevant to the query, and, more importantly, informative and interesting; (2) Acceptable (2 points): the response is grammatical and semantically relevant to the query, but too trivial or generic (e.g., “我不知道 (I don't know)", “我也是 (Me too)”, “我喜欢 (I like it)", etc.); (3) Failed (1 point): the response has grammar mistakes or is irrelevant to the query.
Experimental Results and Analysis
In the following, we will present results of all compared methods and conduct a case study on such results. Then, we will perform further analysis of our proposed method by varying different settings of the components designed in our model.
Experimental Results and Analysis ::: Results on All Compared Methods
Results on automatic metrics are shown on the left-hand side of Table TABREF20. From the results we can see that our proposed DCVAE achieves the best BLEU scores and the second best distinct ratios. The HGFU has the best dist-2 ratio, but its BLEU scores are the worst. These results indicate that the responses generated by the HGFU are less close to the ground-truth references. Although the automatic evaluation generally indicates the quality of generated responses, it cannot fully capture response quality, and automatic metrics may not be consistent with human perception BIBREF31. Thus, we consider the human evaluation results more reliable.
For the human evaluation results on the right-hand side of Table TABREF20, we show the mean and standard deviation of all test results as well as the percentage of acceptable responses (2 or 3 points) and good responses (3 points only). Our proposed DCVAE has the best quality score among all compared methods. Moreover, DCVAE achieves a much higher good ratio, which means it generates more informative and interesting responses. Besides, the HGFU's acceptable and good ratios are much lower than our model indicating that it may not maintain enough response relevance when encouraging diversity. This is consistent with the results of the automatic evaluation in Table TABREF20. We also notice that the CVAE achieves the worst human annotation score. This validates that the original CVAE for open-domain response generation does not work well and our proposed DCVAE is an effective way to improve the CVAE for better output diversity.
Experimental Results and Analysis ::: Case Study
Figure FIGREF29 shows four example queries with their responses generated by all compared methods. The Seq2seq baseline tends to generate less informative responses. Though MMI-bidi can select different words to be used, its generated responses are still far from informative. MANM can avoid generating generic responses in most cases, but sometimes its generated response is irrelevant to the query, as shown in the left bottom case. Moreover, the latent responding mechanisms in MANM have no explicit or interpretable meaning. Similar results can be observed from HGFU. If the PMI selects irrelevant cue words, the resulting response may not be relevant. Meanwhile, responses generated by our DCVAE are more informative as well as relevant to input queries.
Experimental Results and Analysis ::: Different Sizes of the Latent Space
We vary the size of the latent space (i.e., the sampled word space $Z$) used in our proposed DCVAE. Figure FIGREF32 shows the automatic and human evaluation results with the latent space set to the top 10k words, the top 20k words, and all words in the vocabulary. On the automatic metrics, as the sampled latent space grows, the BLEU-4 score increases but the distinct ratios drop. We find that although the DCVAE with a small latent space has a higher distinct-1/2 ratio, many of its generated sentences are grammatically incorrect, which also explains its lower BLEU-4 score. On the human evaluation results, all metrics improve with the use of a larger latent space. This is consistent with our motivation that open-domain short-text conversation covers a wide range of topics and areas, and the top frequent words are not enough to capture the content of most training pairs. Thus a small latent space, i.e. the top frequent words only, is not sufficient to model the necessary latent information, and a large latent space is generally favored in our proposed model.
Experimental Results and Analysis ::: Analysis on the Two-Stage Sampling
We further look into whether the two-stage sampling method is effective in the proposed DCVAE. Here, the One-Stage method corresponds to the basic formulation in Section SECREF4 with no use of the clustering information in the prior or posterior network. Results on both automatic and human evaluation metrics are shown in Figures FIGREF37 and FIGREF38. We can observe that the performance of the DCVAE without the two-stage sampling method drops drastically. This means that the proposed two-stage sampling method is important for the DCVAE to work well.
Besides, to validate the effectiveness of clustering, we implemented a modified DCVAE (DCVAE-CD) that uses a pure categorical distribution in which each variable has no exact meaning. That is, the embedding of each latent variable does not correspond to any word embedding. Automatic evaluation results of this modified model are shown in Figure FIGREF39. We can see that DCVAE-CD performs worse, which means the distribution over the word vocabulary is important in our model.
Conclusion
In this paper, we have presented a novel response generation model for short-text conversation via a discrete CVAE. We replace the continuous latent variable in the standard CVAE by an interpretable discrete variable, which is set to a word in the vocabulary. The sampled latent word has an explicit semantic meaning, acting as a guide to the generation of informative and diverse responses. We also propose to use a two-stage sampling approach to enable efficient selection of diverse variables from a large latent space, which is very essential for our model. Experimental results show that our model outperforms various kinds of generation models under both automatic and human evaluations.
Acknowledgements
This work was supported by National Natural Science Foundation of China (Grant No. 61751206, 61876120). | we connect each latent variable with a word in the vocabulary, thus each latent variable has an exact semantic meaning. |
2e3265d83d2a595293ed458152d3ee76ad19e244 | 2e3265d83d2a595293ed458152d3ee76ad19e244_0 | Q: What news dataset was used?
Text: Introduction
The web has provided researchers with vast amounts of unlabeled text data, and enabled the development of increasingly sophisticated language models which can achieve state of the art performance despite having no task specific training BIBREF0, BIBREF1, BIBREF2. It is desirable to adapt these models for bespoke tasks such as short text classification.
Short-text is nuanced, difficult to model statistically, and sparse in features, hindering traditional analysis BIBREF3. These difficulties become further compounded when training is limited, as is the case for many practical applications.
This paper provides a method to expand short-text with additional keywords, generated using a pre-trained language model. The method takes advantage of general language understanding to suggest contextually relevant new words, without necessitating additional domain data. The method can form both derivatives of the input vocabulary, and entirely new words arising from contextualised word interactions and is ideally suited for applications where data volume is limited.
figureBinary Classification of short headlines into 'WorldPost' or 'Crime' categories, shows improved performance with extended pseudo headlines when the training set is small. Using: Random forest classifier, 1000 test examples, 10-fold cross validation.
Literature Review
Document expansion methods have typically focused on creating new features with the help of custom models. Word co-occurrence models BIBREF4, topic modeling BIBREF5, latent concept expansion BIBREF6, and word embedding clustering BIBREF7, are all examples of document expansion methods that must first be trained using either the original dataset or an external dataset from within the same domain. The expansion models may therefore only be used when there is a sufficiently large training set.
Transfer learning was developed as a method of reducing the need for training data by adapting models trained mostly from external data BIBREF8. Transfer learning can be an effective method for short-text classification and requires little domain specific training data BIBREF9, BIBREF10, however it demands training a new model for every new classification task and does not offer a general solution to sparse data enrichment.
Recently, multi-task language models have been developed and trained using ultra-large online datasets without being confined to any narrow applications BIBREF0, BIBREF1, BIBREF2. It is now possible to benefit from the information these models contain by adapting them to the task of text expansion and text classification.
This paper presents a novel approach which combines the advantages of document expansion, transfer learning, and multitask modeling. It expands documents with new and relevant keywords by using the BERT pre-trained language model, thus taking advantage of the transfer learning acquired during BERT's pretraining. It is also unsupervised and requires no task-specific training, thus allowing the same model to be applied to many different tasks or domains.
Procedures ::: Dataset
The News Category Dataset BIBREF11 is a collection of headlines published by HuffPost BIBREF12 between 2012 and 2018, and was obtained online from Kaggle BIBREF13. The full dataset contains 200k news headlines with category labels, publication dates, and short text descriptions. For this analysis, a sample of roughly 33k headlines spanning 23 categories was used. Further analysis can be found in table SECREF12 in the appendix.
Procedures ::: Word Generation
Words were generated using the BERT pre-trained model developed and trained by Google AI Language BIBREF0. BERT creates contextualized word embedding by passing a list of word tokens through 12 hidden transformer layers and generating encoded word vectors. To generate extended text, an original short-text document was passed to pre-trained BERT. At each transformer layer a new word embedding was formed and saved. BERT's vector decoder was then used to convert hidden word vectors to candidate words, the top three candidate words at each encoder layer were kept.
Each input word produced 48 candidate words, however many were duplicates. Examples of generated words per layer can be found in table SECREF12 and SECREF12 in the appendix. The generated words were sorted based on frequency, duplicate words from the original input were removed, as were stop-words, punctuation, and incomplete words. The generated words were then appended to the original document to create extended pseudo documents, the extended document was limited to 120 words in order to normalize each feature set. Further analysis can be found in table SECREF12 in the appendix.
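The layer-wise candidate generation described above can be sketched with the HuggingFace transformers library as follows. The checkpoint name, the handling of sub-word tokens, and the use of the masked-language-model head as the "vector decoder" are assumptions made for illustration; only the per-layer top-3 decoding follows the text directly.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def candidate_words(headline, top_k=3):
    inputs = tokenizer(headline, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states   # embeddings + 12 transformer layers
    candidates = set()
    for layer in hidden_states[1:]:                     # one set of word vectors per layer
        logits = model.cls(layer)                       # decode hidden vectors to vocab logits
        top = logits.topk(top_k, dim=-1).indices[0]     # [seq_len, top_k]
        candidates.update(tokenizer.convert_ids_to_tokens(top.flatten().tolist()))
    return candidates   # de-duplication, stop-word and punctuation filtering follow
```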
figureThe proposed method uses the BERT pre-trained word embedding model to generate new words which are appended to the original text, creating extended pseudo documents.
Procedures ::: Topic Evaluation
To test the proposed method's ability to generate unsupervised words, it was necessary to devise a method of measuring word relevance. Topic modeling was used, based on the assumption that words found in the same topic are more relevant to one another than words from different topics BIBREF14. The complete 200k headline dataset BIBREF11 was modeled using a Naïve Bayes algorithm BIBREF15 to create a word-category co-occurrence model. The top 200 most relevant words were then found for each category and used to create the topic table SECREF12. It was assumed that each category represented its own unique topic.
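A word-category co-occurrence model of this kind can be built directly with a multinomial Naïve Bayes classifier, for example (a scikit-learn sketch; the bag-of-words encoding and the use of per-class log-likelihoods as the relevance ranking are assumptions):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def build_topic_table(headlines, labels, top_n=200):
    vec = CountVectorizer()
    X = vec.fit_transform(headlines)
    nb = MultinomialNB().fit(X, labels)
    vocab = vec.get_feature_names_out()
    # for each category, keep the top_n words with the highest class-conditional likelihood
    return {cat: [vocab[i] for i in nb.feature_log_prob_[k].argsort()[::-1][:top_n]]
            for k, cat in enumerate(nb.classes_)}
```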
The number of relevant output words as a function of the headline’s category label were measured, and can be found in figure SECREF4. The results demonstrate that the proposed method could correctly identify new words relevant to the input topic at a signal to noise ratio of 4 to 1.
figureThe number of generated words within each topic was counted; topics which matched the original headline label were considered 'on target'. Results indicate that the unsupervised generation method produced far more words relating to the label category than to other topics. Tested on 7600 examples spanning 23 topics.
Procedures ::: Binary and Multi-class Classification Experiments
Three datasets were formed by taking equal-length samples from each category label. The new datasets are ‘Worldpost vs Crime’, ‘Politics vs Entertainment’, and ‘Sports vs Comedy’; a fourth multiclass dataset was formed by combining the three sets above.
For each example three feature options were created by extending every headline by 0, 15 and 120 words. Before every run, a test set was removed and held aside. The remaining data was sampled based on the desired training size. Each feature option was one-hot encoded using a unique tfidf-vectorizer BIBREF16 and used to train a random-forest classifier BIBREF17 with 300-estimators for binary predictions and 900-estimators for multiclass.
Random forest was chosen since it performs well on small datasets and is resistant to overfitting BIBREF18. Each feature option was evaluated against its corresponding test set. 10 runs were completed for each dataset.
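The classification pipeline is straightforward to reproduce (a sketch; the estimator counts follow the text, while the macro-averaged F1 and all other settings are assumptions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def run_fold(train_docs, train_labels, test_docs, test_labels, n_estimators=300):
    vec = TfidfVectorizer()                       # a separate vectorizer per feature option
    X_train = vec.fit_transform(train_docs)
    X_test = vec.transform(test_docs)
    clf = RandomForestClassifier(n_estimators=n_estimators)   # 900 for the multiclass runs
    clf.fit(X_train, train_labels)
    return f1_score(test_labels, clf.predict(X_test), average="macro")
```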
Results and Analysis ::: Evaluating word relevance
It is desirable to generate new words which are relevant to the target topics and increase predictive signal, while avoiding words which are irrelevant, add noise, and mislead predictions.
The strategy, described in section SECREF4, was created to measure word relevance and quantify the unsupervised model performance. It can be seen from fig SECREF4 and SECREF12 in the appendix that the proposed expansion method is effective at generating words which relate to topics of the input sentence, even from very little data. From the context of just a single word, the method can generate 3 new relevant words, and can generate as many as 10 new relevant words from sentences which contain 5 topic related words SECREF12. While the method is susceptible to noise, producing on average 1 word related to each irrelevant topic, the number of correct predictions statistically exceed the noise.
Furthermore, because the proposed method does not have any prior knowledge of its target topics, it remains completely domain agnostic, and can be applied generally for short text of any topic.
Results and Analysis ::: Evaluating word relevance ::: Binary Classification
Comparing the performance of extended pseudo documents on three separate binary classification datasets shows significant improvement from baseline in the sparse data region of 100 to 1000 training examples.
The ‘Worldpost vs Crime’ dataset showed the most improvement, as seen in figure SECREF1. Within the sparse data region the extended pseudo documents could achieve similar performance to the original headlines with only half the data, and improved the F1 score by between 1.7% and 13.9%.
The ‘Comedy vs Sports’ dataset, seen in figure SECREF11, showed an average improvement of 2% within the sparse region.
The ‘Politics vs Entertainment’ dataset, figure SECREF11, was unique. It is the only dataset for which the 15-word extended feature set surpassed the 120-word feature set. It demonstrates that the length of the extended pseudo documents can behave like a hyper-parameter for certain datasets, and should be tuned according to the training size.
Results and Analysis ::: Evaluating word relevance ::: Multiclass Classification
The Extended pseudo documents improved multiclass performance by 4.6% on average, in the region of 100 to 3000 training examples, as seen in figure SECREF11. The results indicate the effectiveness of the proposed method at suggesting relevant words within a narrow topic domain, even without any previous domain knowledge.
In each instance it was found that the extended pseudo documents only improved performance on small training sizes. This demonstrates that while the extended pseudo docs are effective at generating artificial data, they also produce a lot of noise. Once the training size exceeds a certain threshold, it becomes no longer necessary to create additional data, and using extended documents simply adds noise to an otherwise well trained model.
figureBinary Classification of 'Politics' or 'Entertainment' demonstrates that the number of added words can behave like a hyper-parameter and should be tuned based on training size. Tested on 1000 examples with 10-fold cross validation.
figureBinary Classification of 'Comedy' vs 'Sports' has less improvement compared to other datasets, which indicates that the proposed method, while constructed to be domain agnostic, shows better performance towards certain topics. Tested on 1000 examples with 10-fold cross validation.
figureAdded Words improve Multiclass Classification between 1.5% and 13% in the range of 150 to 2000 training examples. Tests were conducted using equal-size samples of headlines categorized into 'World-Post', 'Crime', 'Politics', 'Entertainment', 'Sports' or 'Comedy'. A 900-estimator Random Forest classifier was trained for each data point, tested using 2000 examples, and averaged using 10-fold cross validation.
Discussion
Generating new words based solely on ultra-small prompts of 10 words or fewer is a major challenge. A short sentence is often characterized by just a single keyword, and modeling topics from such little data is difficult. Any method of keyword generation that overly relies on the individual words will lack context and fail to add new information, while attempting to freely form new words without any prior domain knowledge is uncertain and leads to misleading suggestions.
This method attempts to find balance between synonym and free-form word generation, by constraining words to fit the original sentence while still allowing for word-word and word-sentence interactions to create novel outputs.
The word vectors must move through the transformer layers together and therefore maintain the same token order and semantic meaning, however they also receive new input from the surrounding words at each layer. The result, as can be seen from table SECREF12 and SECREF12 in the appendix, is that the first few transformer layers are mostly synonyms of the input sentence since the word vectors have not been greatly modified. The central transformer layers are relevant and novel, since they are still slightly constrained but also have been greatly influenced by sentence context. And the final transformer layers are mostly non-sensical, since they have been completely altered from their original state and lost their ability to retrieve real words.
This method is unique since it avoids needing a prior dataset by using the information found within the weights of a general language model. Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base, for instance, has 110 million parameters and was trained on both the Wikipedia Corpus and BooksCorpus BIBREF0, a combined collection of over 3 billion words. The full potential of such vastly trained general language models is still unfolding. This paper demonstrates that by carefully prompting and analysing these models, it is possible to extract new information from them, and extend short-text analysis beyond the limitations posed by word count.
Appendix ::: Additional Tables and Figures
figureA Topic table, created from the category labels of the complete headline dataset, can be used to measure the relevance of generated words.
An original headline was analyzed by counting the number of words which related to each topic. The generated words were then analyzed in the same way. The change in word count between input topics and output topics was measured and plotted as seen in figure SECREF12.
figureBox plot of the number of generated words within a topic as a function of the number of input words within the same topic. Results indicate that additional related words can be generated by increasing the signal of the input prompt. Tested on 7600 examples spanning 23 topics.
figureInformation regarding the original headlines, and generated words used to create extended pseudo headlines.
figureTop 3 guesses for each token position at each layer of a BERT pretrained embedding model. Given the input sentence '2 people injured in Indiana school shooting', the full list of generated words can be obtained from the values in the table.
figureTop 3 guesses for each token position at each layer of a BERT pretrained embedding model. | collection of headlines published by HuffPost BIBREF12 between 2012 and 2018
c2432884287dca4af355698a543bc0db67a8c091 | c2432884287dca4af355698a543bc0db67a8c091_0 | Q: How do they determine similarity between predicted word and topics?
Text: Introduction
The web has provided researchers with vast amounts of unlabeled text data, and enabled the development of increasingly sophisticated language models which can achieve state of the art performance despite having no task specific training BIBREF0, BIBREF1, BIBREF2. It is desirable to adapt these models for bespoke tasks such as short text classification.
Short-text is nuanced, difficult to model statistically, and sparse in features, hindering traditional analysis BIBREF3. These difficulties become further compounded when training is limited, as is the case for many practical applications.
This paper provides a method to expand short-text with additional keywords, generated using a pre-trained language model. The method takes advantage of general language understanding to suggest contextually relevant new words, without necessitating additional domain data. The method can form both derivatives of the input vocabulary, and entirely new words arising from contextualised word interactions and is ideally suited for applications where data volume is limited.
figureBinary Classification of short headlines into 'WorldPost' or 'Crime' categories, shows improved performance with extended pseudo headlines when the training set is small. Using: Random forest classifier, 1000 test examples, 10-fold cross validation.
Literature Review
Document expansion methods have typically focused on creating new features with the help of custom models. Word co-occurrence models BIBREF4, topic modeling BIBREF5, latent concept expansion BIBREF6, and word embedding clustering BIBREF7, are all examples of document expansion methods that must first be trained using either the original dataset or an external dataset from within the same domain. The expansion models may therefore only be used when there is a sufficiently large training set.
Transfer learning was developed as a method of reducing the need for training data by adapting models trained mostly from external data BIBREF8. Transfer learning can be an effective method for short-text classification and requires little domain specific training data BIBREF9, BIBREF10, however it demands training a new model for every new classification task and does not offer a general solution to sparse data enrichment.
Recently, multi-task language models have been developed and trained using ultra-large online datasets without being confined to any narrow applications BIBREF0, BIBREF1, BIBREF2. It is now possible to benefit from the information these models contain by adapting them to the task of text expansion and text classification.
This paper presents a novel approach which combines the advantages of document expansion, transfer learning, and multitask modeling. It expands documents with new and relevant keywords by using the BERT pre-trained language model, thus taking advantage of the transfer learning acquired during BERT's pretraining. It is also unsupervised and requires no task-specific training, thus allowing the same model to be applied to many different tasks or domains.
Procedures ::: Dataset
The News Category Dataset BIBREF11 is a collection of headlines published by HuffPost BIBREF12 between 2012 and 2018, and was obtained online from Kaggle BIBREF13. The full dataset contains 200k news headlines with category labels, publication dates, and short text descriptions. For this analysis, a sample of roughly 33k headlines spanning 23 categories was used. Further analysis can be found in table SECREF12 in the appendix.
Procedures ::: Word Generation
Words were generated using the BERT pre-trained model developed and trained by Google AI Language BIBREF0. BERT creates contextualized word embedding by passing a list of word tokens through 12 hidden transformer layers and generating encoded word vectors. To generate extended text, an original short-text document was passed to pre-trained BERT. At each transformer layer a new word embedding was formed and saved. BERT's vector decoder was then used to convert hidden word vectors to candidate words, the top three candidate words at each encoder layer were kept.
Each input word produced 48 candidate words, however many were duplicates. Examples of generated words per layer can be found in table SECREF12 and SECREF12 in the appendix. The generated words were sorted based on frequency, duplicate words from the original input were removed, as were stop-words, punctuation, and incomplete words. The generated words were then appended to the original document to create extended pseudo documents, the extended document was limited to 120 words in order to normalize each feature set. Further analysis can be found in table SECREF12 in the appendix.
figureThe proposed method uses the BERT pre-trained word embedding model to generate new words which are appended to the original text, creating extended pseudo documents.
Procedures ::: Topic Evaluation
To test the proposed method's ability to generate unsupervised words, it was necessary to devise a method of measuring word relevance. Topic modeling was used, based on the assumption that words found in the same topic are more relevant to one another than words from different topics BIBREF14. The complete 200k headline dataset BIBREF11 was modeled using a Naïve Bayes algorithm BIBREF15 to create a word-category co-occurrence model. The top 200 most relevant words were then found for each category and used to create the topic table SECREF12. It was assumed that each category represented its own unique topic.
The number of relevant output words as a function of the headline’s category label were measured, and can be found in figure SECREF4. The results demonstrate that the proposed method could correctly identify new words relevant to the input topic at a signal to noise ratio of 4 to 1.
figureThe number of generated words within each topic was counted; topics which matched the original headline label were considered 'on target'. Results indicate that the unsupervised generation method produced far more words relating to the label category than to other topics. Tested on 7600 examples spanning 23 topics.
Procedures ::: Binary and Multi-class Classification Experiments
Three datasets were formed by taking equal-length samples from each category label. The new datasets are ‘Worldpost vs Crime’, ‘Politics vs Entertainment’, and ‘Sports vs Comedy’; a fourth multiclass dataset was formed by combining the three sets above.
For each example three feature options were created by extending every headline by 0, 15 and 120 words. Before every run, a test set was removed and held aside. The remaining data was sampled based on the desired training size. Each feature option was one-hot encoded using a unique tfidf-vectorizer BIBREF16 and used to train a random-forest classifier BIBREF17 with 300-estimators for binary predictions and 900-estimators for multiclass.
Random forest was chosen since it performs well on small datasets and is resistant to overfitting BIBREF18. Each feature option was evaluated against its corresponding test set. 10 runs were completed for each dataset.
Results and Analysis ::: Evaluating word relevance
It is desirable to generate new words which are relevant to the target topics and increase predictive signal, while avoiding words which are irrelevant, add noise, and mislead predictions.
The strategy, described in section SECREF4, was created to measure word relevance and quantify the unsupervised model performance. It can be seen from fig SECREF4 and SECREF12 in the appendix that the proposed expansion method is effective at generating words which relate to topics of the input sentence, even from very little data. From the context of just a single word, the method can generate 3 new relevant words, and can generate as many as 10 new relevant words from sentences which contain 5 topic related words SECREF12. While the method is susceptible to noise, producing on average 1 word related to each irrelevant topic, the number of correct predictions statistically exceed the noise.
Furthermore, because the proposed method does not have any prior knowledge of its target topics, it remains completely domain agnostic, and can be applied generally for short text of any topic.
Results and Analysis ::: Evaluating word relevance ::: Binary Classification
Comparing the performance of extended pseudo documents on three separate binary classification datasets shows significant improvement from baseline in the sparse data region of 100 to 1000 training examples.
The ‘Worldpost vs Crime’ dataset showed the most improvement, as seen in figure SECREF1. Within the sparse data region the extended pseudo documents could achieve similar performance to the original headlines with only half the data, and improved the F1 score by between 1.7% and 13.9%.
The ‘Comedy vs Sports’ dataset, seen in figure SECREF11, showed an average improvement of 2% within the sparse region.
The ‘Politics vs Entertainment’ dataset, figure SECREF11, was unique. It is the only dataset for which the 15-word extended feature set surpassed the 120-word feature set. It demonstrates that the length of the extended pseudo documents can behave like a hyper-parameter for certain datasets, and should be tuned according to the training size.
Results and Analysis ::: Evaluating word relevance ::: Multiclass Classification
The Extended pseudo documents improved multiclass performance by 4.6% on average, in the region of 100 to 3000 training examples, as seen in figure SECREF11. The results indicate the effectiveness of the proposed method at suggesting relevant words within a narrow topic domain, even without any previous domain knowledge.
In each instance it was found that the extended pseudo documents only improved performance on small training sizes. This demonstrates that while the extended pseudo docs are effective at generating artificial data, they also produce a lot of noise. Once the training size exceeds a certain threshold, it becomes no longer necessary to create additional data, and using extended documents simply adds noise to an otherwise well trained model.
figureBinary Classification of 'Politics' or 'Entertainment' demonstrates that the number of added words can behave like a hyper-parameter and should be tuned based on training size. Tested on 1000 examples with 10-fold cross validation.
figureBinary Classification of 'Comedy' vs 'Sports' has less improvement compared to other datasets, which indicates that the proposed method, while constructed to be domain agnostic, shows better performance towards certain topics. Tested on 1000 examples with 10-fold cross validation.
figureAdded Words improve Multiclass Classification between 1.5% and 13% in the range of 150 to 2000 training examples. Tests were conducted using equal-size samples of headlines categorized into 'World-Post', 'Crime', 'Politics', 'Entertainment', 'Sports' or 'Comedy'. A 900-estimator Random Forest classifier was trained for each data point, tested using 2000 examples, and averaged using 10-fold cross validation.
Discussion
Generating new words based solely on ultra-small prompts of 10 words or fewer is a major challenge. A short sentence is often characterized by just a single keyword, and modeling topics from such little data is difficult. Any method of keyword generation that overly relies on the individual words will lack context and fail to add new information, while attempting to freely form new words without any prior domain knowledge is uncertain and leads to misleading suggestions.
This method attempts to find balance between synonym and free-form word generation, by constraining words to fit the original sentence while still allowing for word-word and word-sentence interactions to create novel outputs.
The word vectors must move through the transformer layers together and therefore maintain the same token order and semantic meaning, however they also receive new input from the surrounding words at each layer. The result, as can be seen from table SECREF12 and SECREF12 in the appendix, is that the first few transformer layers are mostly synonyms of the input sentence since the word vectors have not been greatly modified. The central transformer layers are relevant and novel, since they are still slightly constrained but also have been greatly influenced by sentence context. And the final transformer layers are mostly non-sensical, since they have been completely altered from their original state and lost their ability to retrieve real words.
This method is unique since it avoids needing a prior dataset by using the information found within the weights of a general language model. Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base, for instance, has 110 million parameters and was trained on both the Wikipedia Corpus and BooksCorpus BIBREF0, a combined collection of over 3 billion words. The full potential of such vastly trained general language models is still unfolding. This paper demonstrates that by carefully prompting and analysing these models, it is possible to extract new information from them, and extend short-text analysis beyond the limitations posed by word count.
Appendix ::: Additional Tables and Figures
figureA Topic table, created from the category labels of the complete headline dataset, can be used to measure the relevance of generated words.
An original headline was analyzed by counting the number of words which related to each topic. The generated words were then analyzed in the same way. The change in word count between input topics and output topics was measured and plotted as seen in figure SECREF12.
figureBox plot of the number of generated words within a topic as a function of the number of input words within the same topic. Results indicate that additional related words can be generated by increasing the signal of the input prompt. Tested on 7600 examples spanning 23 topics.
figureInformation regarding the original headlines, and generated words used to create extended pseudo headlines.
figureTop 3 guesses for each token position at each layer of a BERT pretrained embedding model. Given the input sentence '2 people injured in Indiana school shooting', the full list of generated words can be obtained from the values in the table.
figureTop 3 guesses for each token position at each layer of a BERT pretrained embedding model. | number of relevant output words as a function of the headline’s category label
226ae469a65611f041de3ae545be0e386dba7d19 | 226ae469a65611f041de3ae545be0e386dba7d19_0 | Q: What is the language model pre-trained on?
Text: Introduction
The web has provided researchers with vast amounts of unlabeled text data, and enabled the development of increasingly sophisticated language models which can achieve state of the art performance despite having no task specific training BIBREF0, BIBREF1, BIBREF2. It is desirable to adapt these models for bespoke tasks such as short text classification.
Short-text is nuanced, difficult to model statistically, and sparse in features, hindering traditional analysis BIBREF3. These difficulties become further compounded when training is limited, as is the case for many practical applications.
This paper provides a method to expand short-text with additional keywords, generated using a pre-trained language model. The method takes advantage of general language understanding to suggest contextually relevant new words, without necessitating additional domain data. The method can form both derivatives of the input vocabulary, and entirely new words arising from contextualised word interactions and is ideally suited for applications where data volume is limited.
figureBinary Classification of short headlines into 'WorldPost' or 'Crime' categories, shows improved performance with extended pseudo headlines when the training set is small. Using: Random forest classifier, 1000 test examples, 10-fold cross validation.
Literature Review
Document expansion methods have typically focused on creating new features with the help of custom models. Word co-occurrence models BIBREF4, topic modeling BIBREF5, latent concept expansion BIBREF6, and word embedding clustering BIBREF7, are all examples of document expansion methods that must first be trained using either the original dataset or an external dataset from within the same domain. The expansion models may therefore only be used when there is a sufficiently large training set.
Transfer learning was developed as a method of reducing the need for training data by adapting models trained mostly from external data BIBREF8. Transfer learning can be an effective method for short-text classification and requires little domain specific training data BIBREF9, BIBREF10, however it demands training a new model for every new classification task and does not offer a general solution to sparse data enrichment.
Recently, multi-task language models have been developed and trained using ultra-large online datasets without being confined to any narrow applications BIBREF0, BIBREF1, BIBREF2. It is now possible to benefit from the information these models contain by adapting them to the task of text expansion and text classification.
This paper presents a novel approach which combines the advantages of document expansion, transfer learning, and multitask modeling. It expands documents with new and relevant keywords by using the BERT pre-trained language model, thus taking advantage of the transfer learning acquired during BERT's pretraining. It is also unsupervised and requires no task-specific training, thus allowing the same model to be applied to many different tasks or domains.
Procedures ::: Dataset
The News Category Dataset BIBREF11 is a collection of headlines published by HuffPost BIBREF12 between 2012 and 2018, and was obtained online from Kaggle BIBREF13. The full dataset contains 200k news headlines with category labels, publication dates, and short text descriptions. For this analysis, a sample of roughly 33k headlines spanning 23 categories was used. Further analysis can be found in table SECREF12 in the appendix.
Procedures ::: Word Generation
Words were generated using the BERT pre-trained model developed and trained by Google AI Language BIBREF0. BERT creates contextualized word embedding by passing a list of word tokens through 12 hidden transformer layers and generating encoded word vectors. To generate extended text, an original short-text document was passed to pre-trained BERT. At each transformer layer a new word embedding was formed and saved. BERT's vector decoder was then used to convert hidden word vectors to candidate words, the top three candidate words at each encoder layer were kept.
Each input word produced 48 candidate words, however many were duplicates. Examples of generated words per layer can be found in table SECREF12 and SECREF12 in the appendix. The generated words were sorted based on frequency, duplicate words from the original input were removed, as were stop-words, punctuation, and incomplete words. The generated words were then appended to the original document to create extended pseudo documents, the extended document was limited to 120 words in order to normalize each feature set. Further analysis can be found in table SECREF12 in the appendix.
figureThe proposed method uses the BERT pre-trained word embedding model to generate new words which are appended to the original text, creating extended pseudo documents.
Procedures ::: Topic Evaluation
To test the proposed method's ability to generate unsupervised words, it was necessary to devise a method of measuring word relevance. Topic modeling was used, based on the assumption that words found in the same topic are more relevant to one another than words from different topics BIBREF14. The complete 200k headline dataset BIBREF11 was modeled using a Naïve Bayes algorithm BIBREF15 to create a word-category co-occurrence model. The top 200 most relevant words were then found for each category and used to create the topic table SECREF12. It was assumed that each category represented its own unique topic.
The number of relevant output words as a function of the headline’s category label were measured, and can be found in figure SECREF4. The results demonstrate that the proposed method could correctly identify new words relevant to the input topic at a signal to noise ratio of 4 to 1.
figureThe number of generated words within each topic was counted; topics which matched the original headline label were considered 'on target'. Results indicate that the unsupervised generation method produced far more words relating to the label category than to other topics. Tested on 7600 examples spanning 23 topics.
Procedures ::: Binary and Multi-class Classification Experiments
Three datasets were formed by taking equal-length samples from each category label. The new datasets are ‘Worldpost vs Crime’, ‘Politics vs Entertainment’, and ‘Sports vs Comedy’; a fourth multiclass dataset was formed by combining the three sets above.
For each example three feature options were created by extending every headline by 0, 15 and 120 words. Before every run, a test set was removed and held aside. The remaining data was sampled based on the desired training size. Each feature option was one-hot encoded using a unique tfidf-vectorizer BIBREF16 and used to train a random-forest classifier BIBREF17 with 300-estimators for binary predictions and 900-estimators for multiclass.
Random forest was chosen since it performs well on small datasets and is resistant to overfitting BIBREF18. Each feature option was evaluated against its corresponding test set. 10 runs were completed for each dataset.
Results and Analysis ::: Evaluating word relevance
It is desirable to generate new words which are relevant to the target topics and increase predictive signal, while avoiding words which are irrelevant, add noise, and mislead predictions.
The strategy, described in section SECREF4, was created to measure word relevance and quantify the unsupervised model performance. It can be seen from fig SECREF4 and SECREF12 in the appendix that the proposed expansion method is effective at generating words which relate to topics of the input sentence, even from very little data. From the context of just a single word, the method can generate 3 new relevant words, and can generate as many as 10 new relevant words from sentences which contain 5 topic related words SECREF12. While the method is susceptible to noise, producing on average 1 word related to each irrelevant topic, the number of correct predictions statistically exceed the noise.
Furthermore, because the proposed method does not have any prior knowledge of its target topics, it remains completely domain agnostic, and can be applied generally for short text of any topic.
Results and Analysis ::: Evaluating word relevance ::: Binary Classification
Comparing the performance of extended pseudo documents on three separate binary classification datasets shows significant improvement from baseline in the sparse data region of 100 to 1000 training examples.
The ‘Worldpost vs Crime’ dataset showed the most improvement, as seen in figure SECREF1. Within the sparse data region the extended pseudo documents could achieve similar performance to the original headlines with only half the data, and improved the F1 score by between 1.7% and 13.9%.
The ‘Comedy vs Sports’ dataset, seen in figure SECREF11, showed an average improvement of 2% within the sparse region.
The ‘Politics vs Entertainment’ dataset, figure SECREF11, was unique. It is the only dataset for which the 15-word extended feature set surpassed the 120-word feature set. It demonstrates that the length of the extended pseudo documents can behave like a hyper-parameter for certain datasets, and should be tuned according to the training size.
Results and Analysis ::: Evaluating word relevance ::: Multiclass Classification
The Extended pseudo documents improved multiclass performance by 4.6% on average, in the region of 100 to 3000 training examples, as seen in figure SECREF11. The results indicate the effectiveness of the proposed method at suggesting relevant words within a narrow topic domain, even without any previous domain knowledge.
In each instance it was found that the extended pseudo documents only improved performance on small training sizes. This demonstrates that while the extended pseudo docs are effective at generating artificial data, they also produce a lot of noise. Once the training size exceeds a certain threshold, it becomes no longer necessary to create additional data, and using extended documents simply adds noise to an otherwise well trained model.
figureBinary Classification of 'Politics' or 'Entertainment' demonstrates that the number of added words can behave like a hyper-parameter and should be tuned based on training size. Tested on 1000 examples with 10-fold cross validation.
figureBinary Classification of 'Comedy' vs 'Sports' has less improvement compared to other datasets, which indicates that the proposed method, while constructed to be domain agnostic, shows better performance towards certain topics. Tested on 1000 examples with 10-fold cross validation.
figureAdded Words improve Multiclass Classification between 1.5% and 13% in the range of 150 to 2000 training examples. Tests were conducted using equal-size samples of headlines categorized into 'World-Post', 'Crime', 'Politics', 'Entertainment', 'Sports' or 'Comedy'. A 900-estimator Random Forest classifier was trained for each data point, tested using 2000 examples, and averaged using 10-fold cross validation.
Discussion
Generating new words based solely on ultra-small prompts of 10 words or fewer is a major challenge. A short sentence is often characterized by just a single keyword, and modeling topics from such little data is difficult. Any method of keyword generation that overly relies on the individual words will lack context and fail to add new information, while attempting to freely form new words without any prior domain knowledge is uncertain and leads to misleading suggestions.
This method attempts to find balance between synonym and free-form word generation, by constraining words to fit the original sentence while still allowing for word-word and word-sentence interactions to create novel outputs.
The word vectors must move through the transformer layers together and therefore maintain the same token order and semantic meaning, however they also receive new input from the surrounding words at each layer. The result, as can be seen from table SECREF12 and SECREF12 in the appendix, is that the first few transformer layers are mostly synonyms of the input sentence since the word vectors have not been greatly modified. The central transformer layers are relevant and novel, since they are still slightly constrained but also have been greatly influenced by sentence context. And the final transformer layers are mostly non-sensical, since they have been completely altered from their original state and lost their ability to retrieve real words.
This method is unique since it avoids needing a prior dataset by using the information found within the weights of a general language model. Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base, for instance, has 110 million parameters and was trained on both the Wikipedia Corpus and BooksCorpus BIBREF0, a combined collection of over 3 billion words. The full potential of such vastly trained general language models is still unfolding. This paper demonstrates that by carefully prompting and analysing these models, it is possible to extract new information from them, and extend short-text analysis beyond the limitations posed by word count.
Appendix ::: Additional Tables and Figures
figureA Topic table, created from the category labels of the complete headline dataset, can be used to measure the relevance of generated words.
An original headline was analyzed by counting the number of words which related to each topic. The generated words were then analyzed in the same way. The change in word count between input topics and output topics was measured and plotted as seen in figure SECREF12.
figureBox plot of the number of generated words within a topic as a function of the number of input words within the same topic. Results indicate that additional related words can be generated by increasing the signal of the input prompt. Tested on 7600 examples spanning 23 topics.
figureInformation regarding the original headlines, and generated words used to create extended pseudo headlines.
figureTop 3 guesses for each token position at each layer of a BERT pretrained embedding model. Given the input sentence '2 people injured in Indiana school shooting', the full list of generated words can be obtained from the values in the table.
figureTop 3 guesses for each token position at each layer of a BERT pretrained embedding model. | Wikipedia Corpus and BooksCorpus
8ad815b29cc32c1861b77de938c7269c9259a064 | 8ad815b29cc32c1861b77de938c7269c9259a064_0 | Q: What languages are represented in the dataset?
Text: Introduction
Language Identification (LID) is the Natural Language Processing (NLP) task of automatically recognizing the language that a document is written in. While this task was called "solved" by some authors over a decade ago, it has seen a resurgence in recent years thanks to the rise in popularity of social media BIBREF0, BIBREF1, and the corresponding daily creation of millions of new messages in dozens of different languages including rare ones that are not often included in language identification systems. Moreover, these messages are typically very short (Twitter messages were until recently limited to 140 characters) and very noisy (including an abundance of spelling mistakes, non-word tokens like URLs, emoticons, or hashtags, as well as foreign-language words in messages of another language), whereas LID was solved using long and clean documents. Indeed, several studies have shown that LID systems trained to a high accuracy on traditional documents suffer significant drops in accuracy when applied to short social-media texts BIBREF2, BIBREF3.
Given its massive scale, multilingual nature, and popularity, Twitter has naturally attracted the attention of the LID research community. Several attempts have been made to construct LID datasets from that resource. However, a major challenge is to assign each tweet in the dataset to the correct language among the more than 70 languages used on the platform. The three commonly-used approaches are to rely on human labeling BIBREF4, BIBREF5, machine detection BIBREF5, BIBREF6, or user geolocation BIBREF3, BIBREF7, BIBREF8. Human labeling is an expensive process in terms of workload, and it is thus infeasible to apply it to create a massive dataset and get the full benefit of Twitter's scale. Automated LID labeling of this data creates a noisy and imperfect dataset, which is to be expected since the purpose of these datasets is to create new and better LID algorithms. And user geolocation is based on the assumption that users in a geographic region use the language of that region; an assumption that is not always correct, which is why this technique is usually paired with one of the other two. Our first contribution in this paper is to propose a new approach to build and automatically label a Twitter LID dataset, and to show that it scales up well by building a dataset of over 18 million labeled tweets. Our hope is that our new Twitter dataset will become a benchmarking standard in the LID literature.
Traditional LID models BIBREF2, BIBREF3, BIBREF9 proposed different ideas to design a set of useful features. This set of features is then passed to traditional machine learning algorithms such as Naive Bayes (NB). The resulting systems are capable of labeling thousands of inputs per second with moderate accuracy. Meanwhile, neural network models BIBREF10, BIBREF6 approach the problem by designing a deep and complex architecture like gated recurrent unit (GRU) or encoder-decoder net. These models use the message text itself as input using a sequence of character embeddings, and automatically learn its hidden structure via a deep neural network. Consequently, they obtain better results in the task but with an efficiency trade-off. To alleviate these drawbacks, our second contribution in this paper is to propose a shallow but efficient neural LID algorithm. We followed previous neural LID BIBREF10, BIBREF6 in using character embeddings as inputs. However, instead of using a deep neural net, we propose to use a shallow ngram-regional convolution neural network (CNN) with an attention mechanism to learn input representation. We experimentally prove that the ngram-regional CNN is the best choice to tackle the bottleneck problem in neural LID. We also illustrate the behaviour of the attention structure in focusing on the most important features in the text for the task. Compared with other benchmarks on our Twitter datasets, our proposed model consistently achieves new state-of-the-art results with an improvement of 5% in accuracy and F1 score and a competitive inference time.
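To make the architecture concrete, an n-gram-regional convolution over character embeddings with attention pooling can be sketched as follows; the embedding size, the number of filters, the n-gram width, and the single-layer attention are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NgramCNNAttentionLID(nn.Module):
    def __init__(self, vocab_size, num_langs, emb_dim=128, filters=256, ngram=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # n-gram-regional convolution over the character sequence
        self.conv = nn.Conv1d(emb_dim, filters, kernel_size=ngram, padding=ngram // 2)
        self.att = nn.Linear(filters, 1)          # attention score per position
        self.out = nn.Linear(filters, num_langs)

    def forward(self, char_ids):                   # char_ids: [B, T]
        x = self.emb(char_ids).transpose(1, 2)             # [B, emb_dim, T]
        h = torch.relu(self.conv(x)).transpose(1, 2)       # [B, T, filters]
        a = F.softmax(self.att(h).squeeze(-1), dim=-1)     # [B, T] attention weights
        rep = torch.bmm(a.unsqueeze(1), h).squeeze(1)      # weighted sum -> [B, filters]
        return self.out(rep)                               # language logits
```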
The rest of this paper is structured as follows. After a background review in the next section, we will present our Twitter dataset in Section SECREF3. Our novel LID algorithm will be the topic of Section SECREF4. We will then present and analyze some experiments we conducted with our algorithm in Section SECREF5, along with benchmarking tests of popular and literature LID systems, before drawing some concluding remarks in Section SECREF6. Our Twitter dataset and our LID algorithm's source code are publicly available.
Related Work
In this section, we will consider recent advances on the specific challenge of language identification in short text messages. Readers interested in a general overview of the area of LID, including older work and other challenges in the area, are encouraged to read the thorough survey of BIBREF0.
Related Work ::: Probabilistic LID
One of the first, if not the first, systems for LID specialized for short text messages is the graph-based method of BIBREF5. Their graph is composed of vertices, or character n-grams (n = 3) observed in messages in all languages, and of edges, or connections between successive n-grams weighted by the observed frequency of that connection in each language. Identifying the language of a new message is then done by identifying the most probable path in the graph that generates that message. Their method achieves an accuracy of 0.975 on their own Twitter corpus.
Carter, Weerkamp, and Tsagkias proposed an approach for LID that exploits the very nature of social media text BIBREF3. Their approach computes the prior probability of the message being in a given language independently of the content of the message itself, in five different ways: by identifying the language of external content linked to by the message, the language of previous messages by the same user, the language used by other users explicitly mentioned in the message, the language of previous messages in the on-going conversation, and the language of other messages that share the same hashtags. They achieve a top accuracy of 0.972 when combining these five priors with a linear interpolation.
One of the most popular language identification packages is the langid.py library proposed in BIBREF2, thanks to the fact it is an open-source, ready-to-use library written in the Python programming language. It is a multinomial Naïve Bayes classifier trained on character n-grams (1 $\le $ n $\le $ 4) from 97 different languages. The training data comes from longer document sources, both formal ones (government publications, software documentation, and newswire) and informal ones (online encyclopedia articles and websites). While their system is not specialized for short messages, the authors claim their algorithm can generalize across domains off-the-shelf, and they conducted experiments using the Twitter datasets of BIBREF5 and BIBREF3 that achieved accuracies of 0.941 and 0.886 respectively, which is weaker than the specialized short-message LID systems of BIBREF5 and BIBREF3.
Starting from the basic observation of Zipf's Law, that each language has a small number of words that occur very frequently in most documents, the authors of BIBREF9 created a dictionary-based algorithm they called Quelingua. This algorithm includes ranked dictionaries of the 1,000 most popular words of each language it is trained to recognize. Given a new message, recognized words are given a weight based on their rank in each language, and the identified language is the one with the highest sum of word weights. Quelingua achieves an F1-score of 0.733 on the TweetLID competition corpus BIBREF11, a narrow improvement over a trigram Naïve Bayes classifier which achieves an F1-Score of 0.727 on the same corpus, but below the best results achieved in the competition.
Related Work ::: Neural Network LID
Neural network models have been applied on many NLP problems in recent years with great success, achieving excellent performance on challenges ranging from text classification BIBREF12 to sequence labeling BIBREF13. In LID, the authors of BIBREF1 built a hierarchical system of two neural networks. The first level is a Convolutional Neural Network (CNN) that converts white-space-delimited words into a word vector. The second level is a Long-Short-Term Memory (LSTM) network (a type of recurrent neural network (RNN)) that takes in sequences of word vectors outputted by the first level and maps them to language labels. They trained and tested their network on Twitter's official Twitter70 dataset, and achieved an F-score of 0.912, compared to langid.py's performance of 0.879 on the same dataset. They also trained and tested their system using the TweetLID corpus and achieved an F1-score of 0.762, above the system of BIBREF9 presented earlier, and above the top system of the TweetLID competition, the SVM LID system of BIBREF14 which achieved an F1-score of 0.752.
The authors of BIBREF10 also used an RNN system, but preferred the Gated Recurrent Unit (GRU) architecture to the LSTM, indicating that it performed slightly better in their experiments. Their system breaks the text into non-overlapping 200-character segments, and feeds character n-grams (n = 8) into the GRU network to classify each letter into a probable language. The segment's language is simply the most probable language over all letters, and the text's language is the most probable language over all segments. The authors tested their system on short messages, but not on tweets; they built their own corpus of short messages by dividing their data into 200-character segments. On that corpus, they achieve an accuracy of 0.955, while langid.py achieves 0.912.
The authors of BIBREF6 also created a character-level LID network using a GRU architecture, in the form of a three-layer encoder-decoder RNN. They trained and tested their system using their own Twitter dataset, and achieved an F1-score of 0.982, while langid.py achieved 0.960 on the same dataset.
To summarize, we present the key results of the papers reviewed in this section in Table TABREF1, along with the results langid.py obtained on the same datasets as benchmark.
Our Twitter LID Datasets ::: Source Data and Language Labeling
Unlike other authors who built Twitter datasets, we chose not to mine tweets from Twitter directly through their API, but instead use tweets that have already been downloaded and archived on the Internet Archive. This has two important benefits: this site makes its content freely available for research purposes, unlike Twitter which comes with restrictions (especially on distribution), and the tweets are backed-up permanently, as opposed to Twitter where tweets may be deleted at any time and become unavailable for future research or replication of past studies. The Internet Archive has made available a set of 1.7 billion tweets collected over the year of 2017 in a 600GB JSON file which includes all tweet metadata attributes. Five of these attributes are of particular importance to us. They are $\it {tweet.id}$, $\it {tweet.user.id}$, $\it {tweet.text}$, $\it {tweet.lang}$, and $\it {tweet.user.lang}$, corresponding respectively to the unique tweet ID number, the unique user ID number, the text content of the tweet in UTF-8 characters, the tweet's language as determined by Twitter's automated LID software, and the user's self-declared language.
We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset.
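This filtering step amounts to a single streaming pass over the archived JSON. The sketch below is a minimal illustration of that pass, assuming one tweet object per line of a gzipped file; the file layout, field availability, and function name are our assumptions rather than the exact preprocessing code used for the paper. It keeps only tweets whose machine-detected language matches the user's self-declared language, and writes out (label, text) pairs.

```python
import gzip
import json

def filter_archive(in_path, out_path):
    """Keep only tweets whose machine-detected language matches the user's
    self-declared language, and use that language as the gold label."""
    kept, total = 0, 0
    with gzip.open(in_path, "rt", encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            total += 1
            try:
                tweet = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed records
            lang = tweet.get("lang")
            user_lang = (tweet.get("user") or {}).get("lang")
            text = tweet.get("text", "")
            # 'und' is Twitter's "undetermined" label and carries no usable class
            if not lang or lang == "und" or lang != user_lang or not text:
                continue
            fout.write(json.dumps({"id": tweet.get("id_str"),
                                   "label": lang,
                                   "text": text}) + "\n")
            kept += 1
    return kept, total
```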
Our Twitter LID Datasets ::: Our Balanced Datasets
When creating a balanced Twitter LID dataset, we face a design question: should our dataset seek to maximize the number of languages present, making it more interesting and challenging for the task of LID, but at the cost of having fewer tweets per language in order to include seldom-used languages? Or should we maximize the number of tweets per language, making the dataset more useful for training deep neural networks, but at the cost of having fewer languages present and eliminating the seldom-used ones? To circumvent this issue, we propose to build three datasets: a small-scale one with more languages but fewer tweets, a large-scale one with more tweets but fewer languages, and a medium-scale one that is a compromise between the two extremes. Moreover, since we plan for our datasets to become standard benchmarking tools, we have subdivided the tweets of each language in each dataset into training, validation, and testing sets.
Small-scale dataset: This dataset is composed of 28 languages with 13,000 tweets per language, subdivided into 7,000 training set tweets, 3,000 validation set tweets, and 3,000 testing set tweets. There is thus a total of 364,000 tweets in this dataset. Referring to Table TABREF6, this dataset includes every language that represents 0.002% or more of the Twitter corpus. To be sure, it is possible to create a smaller dataset with all 54 languages but much fewer tweets per language, but we feel that this is the lower limit to be useful for training LID deep neural systems.
Medium scale dataset: This dataset keeps 22 of the 28 languages of the small-scale dataset, but has 10 times as many tweets per language. In other words, each language has a 70,000-tweet training set, a 30,000-tweet validation set, and a 30,000-tweet testing set, for a total of 2,860,000 tweets.
Large-scale dataset: Once again, we increased tenfold the number of tweets per language, and kept the 14 languages that had sufficient tweets in our initial 900 million tweet corpus. This gives us a dataset where each language has 700,000 tweets in its training set, 300,000 tweets in its validation set, and 300,000 tweets in its testing set, for a total 18,200,000 tweets. Referring to Table TABREF6, this dataset includes every language that represents 0.1% or more of the Twitter corpus.
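The three datasets above differ only in the per-language quota and the set of languages that meet it, so all three can be produced by a single sampling routine. The sketch below shows one way to do this; the in-memory grouping, function name, and random seed are illustrative assumptions (the full 900-million-tweet corpus would of course be processed out of core).

```python
import random
from collections import defaultdict

def build_balanced_splits(tweets, n_train, n_val, n_test, seed=13):
    """tweets: iterable of (label, text) pairs. Keeps only languages with
    enough tweets, samples the same quota per language, and splits it."""
    per_lang = n_train + n_val + n_test
    by_lang = defaultdict(list)
    for label, text in tweets:
        by_lang[label].append(text)

    rng = random.Random(seed)
    splits = {"train": [], "valid": [], "test": []}
    for lang, texts in by_lang.items():
        if len(texts) < per_lang:
            continue  # drop languages that cannot meet the quota
        sample = rng.sample(texts, per_lang)
        splits["train"] += [(lang, t) for t in sample[:n_train]]
        splits["valid"] += [(lang, t) for t in sample[n_train:n_train + n_val]]
        splits["test"] += [(lang, t) for t in sample[n_train + n_val:]]
    return splits

# Small-scale dataset: 7,000 / 3,000 / 3,000 tweets for each of 28 languages
# splits = build_balanced_splits(corpus, 7000, 3000, 3000)
```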
Proposed Model
Since many languages have unclear word boundaries, character n-grams, rather than words, have become widely used as input in LID systems BIBREF2, BIBREF10, BIBREF5, BIBREF6. With this in mind, the LID problem can be defined as follows: given a tweet $\mathit {tw}$ consisting of $n$ ordered characters ($\mathit {tw}=[ch_1, ch_2, ..., ch_n]$) selected within the vocabulary set $\mathit {char}$ of $V$ unique characters ($\mathit {char}=\lbrace ch_1,ch_2, ..., ch_V\rbrace $), the aim is to predict the language $\mathit {\hat{l}}$ present in $tw$ using a classifier:

$\mathit {\hat{l}} = \arg \max _{l_i \in \mathit {l}} Score(l_i|tw)$
where $Score(l_i|tw)$ is a scoring function quantifying how likely language $l_i$ was used given the observed message $tw$.
Most statistical LID systems follow the model of BIBREF2. They start off by using what is called a one-hot encoding technique, which represents each character $ch_i$ as a one-hot vector $\mathbf {x}_i^{oh} \in \mathbb {Z}_2^V$ according to the index of this character in $\mathit {char}$. This transforms $tw$ into a matrix $\mathbf {X}^{oh}$:
The matrix $\mathbf {X}^{oh}$ is passed to a feature extraction function, for example row-wise sum or tf-idf weighting, to obtain a feature vector $\mathbf {h}$. $\mathbf {h}$ is finally fed to a classifier model for either discriminative scoring (e.g. Support Vector Machine) or generative scoring (e.g. Naïve Bayes).
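For concreteness, this classical pipeline (character n-gram counts fed to a generative or discriminative classifier) can be reproduced in a few lines with scikit-learn. The snippet below is only an illustrative baseline in the spirit of langid.py's multinomial Naïve Bayes over character 1- to 4-grams, not the exact configuration of any of the systems cited above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Character 1- to 4-grams scored with a multinomial Naive Bayes classifier.
baseline = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 4)),
    MultinomialNB(),
)

# train_texts / train_labels are lists of tweet strings and language codes
# baseline.fit(train_texts, train_labels)
# predicted = baseline.predict(["ceci est un tweet en français"])
```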
Unlike statistical methods, a typical neural network LID system, as illustrated in Figure FIGREF15, first passes the input through an embedding layer to map each character $ch_i \in tw$ to a low-dimensional dense vector $\mathbf {x}_i \in \mathbb {R}^d$, where $d$ denotes the dimension of the character embeddings. Given an input tweet $tw$, after passing through the embedding layer, we obtain an embedded matrix:
The embedded matrix $\mathbf {X}$ is then fed through a neural network architecture, which transforms it into an output vector $\mathbf {h}=f(\mathbf {X})$ of length L that represents the likelihood of each language, and which is passed through a $\mathit {Softmax}$ function. This updates equation DISPLAY_FORM18 as:
Tweets in particular are noisy messages which can contain a mix of multiple languages. To deal with this challenge, most previous neural network LID systems used deep sequence neural layers, such as an encoder-decoder BIBREF6 or a GRU BIBREF10, to extract global representations at a high computational cost. By contrast, we propose to employ a shallow (single-layer) convolution neural network (CNN) to locally learn region-based features. In addition, we propose to use an attention mechanism to proportionally merge together these local features for an entire tweet $tw$. We hypothesize that the attention mechanism will effectively capture which local features of a particular language are the dominant features of the tweet. There are two major advantages of our proposed architecture: first, the use of the CNN, which has the least number of parameters among other neural networks, simplifies the neural network model and decreases the inference latency; and second, the use of the attention mechanism makes it possible to model the mix of languages while maintaining a competitive performance.
Proposed Model ::: ngram-regional CNN Model
To begin, we present a traditional CNN with an ngram-regional constraint as our baseline. CNNs have been widely used in both image processing BIBREF15 and NLP BIBREF16. The convolution operation of a filter with a region size $m$ is parameterized by a weight matrix $\mathbf {W}_{cnn} \in \mathbb {R}^{d_{cnn}\times md}$ and a bias vector $\mathbf {b}_{cnn} \in \mathbb {R}^{d_{cnn}}$, where $d_{cnn}$ is the dimension of the CNN. The inputs are a sequence of $m$ consecutive input columns in $\mathbf {X}$, represented by a concatenated vector $\mathbf {X}[i:i+m-1] \in \mathbb {R}^{md}$. The region-based feature vector $\mathbf {c}_i$ is computed as follows:

$\mathbf {c}_i = g(\mathbf {W}_{cnn}(\mathbf {x}_i \oplus \mathbf {x}_{i+1} \oplus ... \oplus \mathbf {x}_{i+m-1}) + \mathbf {b}_{cnn})$
where $\oplus $ denotes a concatenation operation and $g$ is a non-linear function. The region filter is slid from the beginning to the end of $\mathbf {X}$ to obtain a convolution matrix $\mathbf {C}$:
The first novelty of our CNN is that we add a zero-padding constraint on both sides of $\mathbf {X}$ to ensure that the number of columns in $\mathbf {C}$ is equal to the number of columns in $\mathbf {X}$. Consequently, each $\mathbf {c}_i$ feature vector corresponds to an $\mathbf {x}_i$ input vector at the same index position $i$, and is learned from concatenating the surrounding $m$-gram embeddings. Particularly:
where $p$ is the number of zero-padding columns. Finally, in a normal CNN, a row-wise max-pooling function is applied on $\mathbf {C}$ to extract the $d_{cnn}$ most salient features, as shown in Equation DISPLAY_FORM26. However, one weakness of this approach is that it extracts the most salient features out of sequence.
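This baseline maps directly onto a standard 1-D convolution with 'same' padding followed by max-pooling over positions. The PyTorch sketch below is our own reading of the equations above; the framework choice, dimension values, and class name are assumptions (the actual hyperparameters are listed in Table TABREF32, and the authors' code is available in their public repository).

```python
import torch
import torch.nn as nn

class NgramRegionalCNN(nn.Module):
    """Character embeddings -> m-gram regional convolution -> max-pooling."""
    def __init__(self, vocab_size, n_langs, d=150, d_cnn=150, m=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d, padding_idx=0)
        # 'same' padding keeps one feature vector c_i per input character x_i
        self.conv = nn.Conv1d(d, d_cnn, kernel_size=m, padding=(m - 1) // 2)
        self.out = nn.Linear(d_cnn, n_langs)

    def forward(self, char_ids):                   # (batch, n) character indices
        x = self.embed(char_ids).transpose(1, 2)   # (batch, d, n) embedded matrix X
        c = torch.relu(self.conv(x))               # (batch, d_cnn, n) regional features C
        h = c.max(dim=2).values                    # row-wise max-pooling
        return self.out(h)                         # one score per language
```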
Proposed Model ::: Attention Mechanism
Instead of the traditional pooling function of Equation DISPLAY_FORM26, a second important innovation of our CNN model is to use an attention mechanism to model the interaction between region-based features from the beginning to the end of an input. Figure FIGREF15 illustrates our proposed model. Given a sequence of regional feature vectors $\mathbf {C}=[\mathbf {c}_1,\mathbf {c}_2,...,\mathbf {c}_n]$ as computed in Equation DISPLAY_FORM24, we pass it through a fully-connected hidden layer to learn a sequence of regional hidden vectors $\mathbf {H}=[\mathbf {h}_1,\mathbf {h}_2,...,\mathbf {h}_n] \in \mathbb {R}^{d_{hd} \times n}$ using Equation DISPLAY_FORM28.
where $g_2$ is a non-linear activation function, $\mathbf {W}_{hd}$ and $\mathbf {b}_{hd}$ denote model parameters, and $d_{hd}$ is the dimension of the hidden layer. We followed Yang et al. BIBREF17 in employing a regional context vector $\mathbf {u} \in \mathbb {R}^{d_{hd}}$ to measure the importance of each window-based hidden vector. The regional importance factors are computed by:
The importance factors are then fed to a $\mathit {Softmax}$ layer to obtain the normalized weight:
The final representation of a given input is computed by a weighted sum of its regional feature vectors:
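Under the same assumptions as the previous sketch, the attention pooling described in this subsection can replace the row-wise max-pooling as follows. Here $g_2$ is taken to be $\tanh$, following Yang et al. BIBREF17; this, like the class and variable names, is an illustrative assumption rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class RegionalAttentionPooling(nn.Module):
    """Weighted sum of regional feature vectors c_i, with weights derived
    from a learned regional context vector u."""
    def __init__(self, d_cnn, d_hd):
        super().__init__()
        self.hidden = nn.Linear(d_cnn, d_hd)             # h_i = g2(W_hd c_i + b_hd)
        self.context = nn.Parameter(torch.randn(d_hd))   # regional context vector u

    def forward(self, c):                     # c: (batch, n, d_cnn) regional features
        h = torch.tanh(self.hidden(c))        # (batch, n, d_hd) regional hidden vectors
        scores = h @ self.context             # (batch, n) importance factors
        alpha = torch.softmax(scores, dim=1)  # normalized weights
        return (alpha.unsqueeze(-1) * c).sum(dim=1)  # (batch, d_cnn) final representation
```

In the previous sketch, this module would replace the max-pooling line, after transposing the convolution output to shape (batch, n, d_cnn).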
Experimental Results ::: Benchmarks
For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. We obtained publicly-available implementations of each of these algorithms, and test them all against our three datasets. In Table TABREF33, we report each algorithm's accuracy and F1 score, the two metrics usually reported in the LID literature. We also included precision and recall values, which are necessary for computing F1 score. And finally we included the speed in number of messages handled per second. This metric is not often discussed in the LID literature, but is of particular importance when dealing with a massive dataset such as ours or a massive streaming source such as Twitter.
We compare these benchmarks to our two models: the improved CNN as described in Section SECREF22 and our proposed CNN model with an attention mechanism of Section SECREF27. These are labelled CNN and Attention CNN in Table TABREF33. In both models, we filter out characters that appear less than 5 times and apply a dropout approach with a dropout rate of $0.5$. ADAM optimization algorithm and early stopping techniques are employed during training. The full list of parameters and settings is given in Table TABREF32. It is worth noting that we randomly select this configuration without any tuning process.
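The training setup of this paragraph can be summarized by the sketch below: rare characters (fewer than 5 occurrences) are mapped to an unknown token, the ADAM optimizer is used, and training stops early once validation accuracy stops improving. The patience value, loop structure, and helper names are our assumptions, and the dropout rate of 0.5 would be applied inside the model definition rather than in this loop; the actual settings are those of Table TABREF32.

```python
import torch
from collections import Counter

def build_vocab(texts, min_count=5):
    """Characters appearing fewer than min_count times are mapped to <unk>."""
    counts = Counter(ch for t in texts for ch in t)
    vocab = {"<pad>": 0, "<unk>": 1}
    for ch, n in counts.items():
        if n >= min_count:
            vocab[ch] = len(vocab)
    return vocab

def evaluate(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for char_ids, labels in loader:
            pred = model(char_ids).argmax(dim=1)
            correct += (pred == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)

def train(model, train_loader, valid_loader, max_epochs=100, patience=3):
    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = torch.nn.CrossEntropyLoss()
    best_valid, epochs_without_gain = 0.0, 0
    for _ in range(max_epochs):
        model.train()
        for char_ids, labels in train_loader:
            optimizer.zero_grad()
            loss_fn(model(char_ids), labels).backward()
            optimizer.step()
        acc = evaluate(model, valid_loader)
        if acc > best_valid:
            best_valid, epochs_without_gain = acc, 0
        else:
            epochs_without_gain += 1
            if epochs_without_gain >= patience:  # early stopping
                break
    return model
```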
Experimental Results ::: Analysis
The first thing that stands out from these results is the speed difference between algorithms. CLD3 and langid.py can both process several thousand messages per second, and CLD2 is even an order of magnitude better, but the two neural-network systems perform considerably worse, at less than a dozen messages per second. This is the efficiency trade-off of neural-network LID systems we mentioned in Section SECREF1; although to be fair, we should also point out that those two systems are research prototypes and thus may not have been fully optimized.
In terms of accuracy and F1 score, langid.py, LanideNN, and EquiLID have very similar performances. All three consistently score above 0.90, and each achieves the best accuracy or the best F1 score at some point, if only by 0.002. By contrast, CLD2 and CLD3 have weaker performances; significantly so in the case of CLD3. In all cases, using our small-, medium-, or large-scale test set does not significantly affect the results.
All the benchmark systems were tested using the pre-trained models they come with. For comparison purposes, we retrained langid.py from scratch using the training and validation portion of our datasets, and ran the tests again. Surprisingly, we find that the results are worse for all metrics compared to using its pre-trained model, and moreover that using the medium- and large-scale datasets gives significantly worse results than using the small-scale dataset. This may be because the corpus that langid.py was originally trained on and optimized for is drastically different from ours: it is an imbalanced dataset of 18,269 tweets in 9 languages. Our larger corpora, being more drastically different from the original, give increasingly worse performances. This observation may also explain the almost 10% variation in performance of langid.py reported in the literature and reproduced in Table TABREF1. The fact that the library's message-handling speed also drops massively compared to its pre-trained results further indicates how heavily the software was optimized for its original corpus. Based on this initial result, we decided not to retrain the other benchmark systems.
The last two lines of Table TABREF33 report the results of our basic CNN and our attention CNN LID systems. It can be seen that both of them outperform the benchmark systems in accuracy, precision, recall, and F1 score in all experiments. Moreover, the attention CNN outperforms the basic CNN in every metric (we will explore the benefit of the attention mechanism in the next subsection). In terms of processing speed, only the CLD2 system surpasses ours, but it does so at the cost of a 10% drop in accuracy and F1 score. Looking at the choice of datasets, it can be seen that training with the large-scale dataset leads to a nearly 1% improvement compared to the medium-sized dataset, which also gives a 1% improvement compared to the small-scale dataset. While it is expected that using more training data will lead to a better system and better results, the small improvement indicates that even our small-scale dataset has sufficient messages to allow the network training to converge.
Experimental Results ::: Impact of Attention Mechanism
We can further illustrate the impact of our attention mechanism by displaying the importance factor $\alpha _i$ corresponding to each character $ch_i$ in selected tweets. Table TABREF41 shows a set of tweets that were correctly identified by the attention CNN but misclassified by the regular CNN in three different languages: English, French, and Vietnamese. The color intensity of a letter's cell is proportional to the attention mechanism's normalized weight $\alpha _i$, that is, to the focus the network puts on that character. In other words, the attention CNN puts more importance on the features that have the darkest color. The case studies of Table TABREF41 show the noise-tolerance that comes from the attention mechanism. It can be seen that the system puts virtually no weight on URL links (e.g. $tw_{en_1}$, $tw_{fr_2}$, $tw_{vi_2}$), on hashtags (e.g. $tw_{en_3}$), or on usernames (e.g. $tw_{en_2}$, $tw_{fr_1}$, $tw_{vi_1}$). We should emphasize that our system does not implement any text preprocessing steps; the input tweets are kept as-is. Despite that, the network learned to distinguish between words and non-words, and to focus mainly on the former. In fact, when the network does put attention on these elements, it is when they appear to use real words (e.g. “star” and “seed” in the username of $tw_{en_2}$, “mother” and “none” in the hashtag of $tw_{en_3}$). This also illustrates how the attention mechanism can pick out fine-grained features within noisy text: in those examples, it was able to focus on real-word components of longer non-word strings.
The examples of Table TABREF41 also show that the attention CNN learns to focus on common words to recognize languages. Some of the highest-weighted characters in the example tweets are found in common determiners, adverbs, and verbs of each language. These include “in” ($tw_{en_1}$), “des” ($tw_{fr_1}$), “le” ($tw_{fr_2}$), “est” ($tw_{fr_3}$), “quá” ($tw_{vi_2}$), and “nhất” ($tw_{vi_3}$). These letters and words contribute significantly to identifying the language of a given input.
Finally, when multiple languages are found within a tweet, the network successfully captures all of them. For example, $tw_{fr_3}$ switches from French to Spanish and $tw_{vi_2}$ mixes both English and Vietnamese. In both cases, the network identifies features of both languages; it focuses strongly on “est” and “y” in $tw_{fr_3}$, and on “Don't” and “bài” in $tw_{vi_2}$. The message of $tw_{vi_3}$ mixes three languages, Vietnamese, English, and Korean, and the network focuses on all three parts, by picking out “nhật” and “mừng” in Vietnamese, “#생일축하해” and “#태형생일” in Korean, and “$\textbf {h}ave$” in English. Since our system is set up to classify each tweet into a single language, the strongest feature of each tweet wins out and the message is classified in the corresponding language. Nonetheless, it is significant to see that features of all languages present in the tweet are picked out, and a future version of our system could successfully decompose the tweets into portions of each language.
Conclusion
In this paper, we first demonstrated how to build balanced, automatically-labelled, and massive LID datasets. These datasets are taken from Twitter, and are thus composed of real-world and noisy messages. We applied our technique to build three datasets ranging from hundreds of thousands to tens of millions of short texts. Next, we proposed our new neural LID system, a CNN-based network with an attention mechanism to mitigate the performance bottleneck issue while still maintaining a state-of-the-art performance. The results obtained by our system surpassed five benchmark LID systems by 5% to 10%. Moreover, our analysis of the attention mechanism shed some light on the inner workings of the typically-black-box neural network, and demonstrated how it helps pick out the most important linguistic features while ignoring noise. All of our datasets and source code are publicly available at https://github.com/duytinvo/LID_NN. | EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO |
3f9ef59ac06db3f99b8b6f082308610eb2d3626a | 3f9ef59ac06db3f99b8b6f082308610eb2d3626a_0 | Q: Which existing language ID systems are tested?
Text: Introduction
Language Identification (LID) is the Natural Language Processing (NLP) task of automatically recognizing the language that a document is written in. While this task was called "solved" by some authors over a decade ago, it has seen a resurgence in recent years thanks to the rise in popularity of social media BIBREF0, BIBREF1, and the corresponding daily creation of millions of new messages in dozens of different languages including rare ones that are not often included in language identification systems. Moreover, these messages are typically very short (Twitter messages were until recently limited to 140 characters) and very noisy (including an abundance of spelling mistakes, non-word tokens like URLs, emoticons, or hashtags, as well as foreign-language words in messages of another language), whereas LID was solved using long and clean documents. Indeed, several studies have shown that LID systems trained to a high accuracy on traditional documents suffer significant drops in accuracy when applied to short social-media texts BIBREF2, BIBREF3.
Given its massive scale, multilingual nature, and popularity, Twitter has naturally attracted the attention of the LID research community. Several attempts have been made to construct LID datasets from that resource. However, a major challenge is to assign each tweet in the dataset to the correct language among the more than 70 languages used on the platform. The three commonly-used approaches are to rely on human labeling BIBREF4, BIBREF5, machine detection BIBREF5, BIBREF6, or user geolocation BIBREF3, BIBREF7, BIBREF8. Human labeling is an expensive process in terms of workload, and it is thus infeasible to apply it to create a massive dataset and get the full benefit of Twitter's scale. Automated LID labeling of this data creates a noisy and imperfect dataset, which is to be expected since the purpose of these datasets is to create new and better LID algorithms. And user geolocation is based on the assumption that users in a geographic region use the language of that region; an assumption that is not always correct, which is why this technique is usually paired with one of the other two. Our first contribution in this paper is to propose a new approach to build and automatically label a Twitter LID dataset, and to show that it scales up well by building a dataset of over 18 million labeled tweets. Our hope is that our new Twitter dataset will become a benchmarking standard in the LID literature.
Traditional LID models BIBREF2, BIBREF3, BIBREF9 proposed different ideas to design a set of useful features. This set of features is then passed to traditional machine learning algorithms such as Naive Bayes (NB). The resulting systems are capable of labeling thousands of inputs per second with moderate accuracy. Meanwhile, neural network models BIBREF10, BIBREF6 approach the problem by designing a deep and complex architecture like gated recurrent unit (GRU) or encoder-decoder net. These models use the message text itself as input using a sequence of character embeddings, and automatically learn its hidden structure via a deep neural network. Consequently, they obtain better results in the task but with an efficiency trade-off. To alleviate these drawbacks, our second contribution in this paper is to propose a shallow but efficient neural LID algorithm. We followed previous neural LID BIBREF10, BIBREF6 in using character embeddings as inputs. However, instead of using a deep neural net, we propose to use a shallow ngram-regional convolution neural network (CNN) with an attention mechanism to learn input representation. We experimentally prove that the ngram-regional CNN is the best choice to tackle the bottleneck problem in neural LID. We also illustrate the behaviour of the attention structure in focusing on the most important features in the text for the task. Compared with other benchmarks on our Twitter datasets, our proposed model consistently achieves new state-of-the-art results with an improvement of 5% in accuracy and F1 score and a competitive inference time.
The rest of this paper is structured as follows. After a background review in the next section, we will present our Twitter dataset in Section SECREF3. Our novel LID algorithm will be the topic of Section SECREF4. We will then present and analyze some experiments we conducted with our algorithm in Section SECREF5, along with benchmarking tests of popular and literature LID systems, before drawing some concluding remarks in Section SECREF6. Our Twitter dataset and our LID algorithm's source code are publicly available.
Related Work
In this section, we will consider recent advances on the specific challenge of language identification in short text messages. Readers interested in a general overview of the area of LID, including older work and other challenges in the area, are encouraged to read the thorough survey of BIBREF0.
Related Work ::: Probabilistic LID
One of the first, if not the first, systems for LID specialized for short text messages is the graph-based method of BIBREF5. Their graph is composed of vertices, or character n-grams (n = 3) observed in messages in all languages, and of edges, or connections between successive n-grams weighted by the observed frequency of that connection in each language. Identifying the language of a new message is then done by identifying the most probable path in the graph that generates that message. Their method achieves an accuracy of 0.975 on their own Twitter corpus.
Carter, Weerkamp, and Tsagkias proposed an approach for LID that exploits the very nature of social media text BIBREF3. Their approach computes the prior probability of the message being in a given language independently of the content of the message itself, in five different ways: by identifying the language of external content linked to by the message, the language of previous messages by the same user, the language used by other users explicitly mentioned in the message, the language of previous messages in the on-going conversation, and the language of other messages that share the same hashtags. They achieve a top accuracy of 0.972 when combining these five priors with a linear interpolation.
One of the most popular language identification packages is the langid.py library proposed in BIBREF2, thanks to the fact it is an open-source, ready-to-use library written in the Python programming language. It is a multinomial Naïve Bayes classifier trained on character n-grams (1 $\le $ n $\le $ 4) from 97 different languages. The training data comes from longer document sources, both formal ones (government publications, software documentation, and newswire) and informal ones (online encyclopedia articles and websites). While their system is not specialized for short messages, the authors claim their algorithm can generalize across domains off-the-shelf, and they conducted experiments using the Twitter datasets of BIBREF5 and BIBREF3 that achieved accuracies of 0.941 and 0.886 respectively, which is weaker than the specialized short-message LID systems of BIBREF5 and BIBREF3.
Starting from the basic observation of Zipf's Law, that each language has a small number of words that occur very frequently in most documents, the authors of BIBREF9 created a dictionary-based algorithm they called Quelingua. This algorithm includes ranked dictionaries of the 1,000 most popular words of each language it is trained to recognize. Given a new message, recognized words are given a weight based on their rank in each language, and the identified language is the one with the highest sum of word weights. Quelingua achieves an F1-score of 0.733 on the TweetLID competition corpus BIBREF11, a narrow improvement over a trigram Naïve Bayes classifier which achieves an F1-Score of 0.727 on the same corpus, but below the best results achieved in the competition.
Related Work ::: Neural Network LID
Neural network models have been applied on many NLP problems in recent years with great success, achieving excellent performance on challenges ranging from text classification BIBREF12 to sequence labeling BIBREF13. In LID, the authors of BIBREF1 built a hierarchical system of two neural networks. The first level is a Convolutional Neural Network (CNN) that converts white-space-delimited words into a word vector. The second level is a Long-Short-Term Memory (LSTM) network (a type of recurrent neural network (RNN)) that takes in sequences of word vectors outputted by the first level and maps them to language labels. They trained and tested their network on Twitter's official Twitter70 dataset, and achieved an F-score of 0.912, compared to langid.py's performance of 0.879 on the same dataset. They also trained and tested their system using the TweetLID corpus and achieved an F1-score of 0.762, above the system of BIBREF9 presented earlier, and above the top system of the TweetLID competition, the SVM LID system of BIBREF14 which achieved an F1-score of 0.752.
The authors of BIBREF10 also used an RNN system, but preferred the Gated Recurrent Unit (GRU) architecture to the LSTM, indicating that it performed slightly better in their experiments. Their system breaks the text into non-overlapping 200-character segments, and feeds character n-grams (n = 8) into the GRU network to classify each letter into a probable language. The segment's language is simply the most probable language over all letters, and the text's language is the most probable language over all segments. The authors tested their system on short messages, but not on tweets; they built their own corpus of short messages by dividing their data into 200-character segments. On that corpus, they achieve an accuracy of 0.955, while langid.py achieves 0.912.
The authors of BIBREF6 also created a character-level LID network using a GRU architecture, in the form of a three-layer encoder-decoder RNN. They trained and tested their system using their own Twitter dataset, and achieved an F1-score of 0.982, while langid.py achieved 0.960 on the same dataset.
To summarize, we present the key results of the papers reviewed in this section in Table TABREF1, along with the results langid.py obtained on the same datasets as benchmark.
Our Twitter LID Datasets ::: Source Data and Language Labeling
Unlike other authors who built Twitter datasets, we chose not to mine tweets from Twitter directly through their API, but instead use tweets that have already been downloaded and archived on the Internet Archive. This has two important benefits: this site makes its content freely available for research purposes, unlike Twitter which comes with restrictions (especially on distribution), and the tweets are backed-up permanently, as opposed to Twitter where tweets may be deleted at any time and become unavailable for future research or replication of past studies. The Internet Archive has made available a set of 1.7 billion tweets collected over the year of 2017 in a 600GB JSON file which includes all tweet metadata attributes. Five of these attributes are of particular importance to us. They are $\it {tweet.id}$, $\it {tweet.user.id}$, $\it {tweet.text}$, $\it {tweet.lang}$, and $\it {tweet.user.lang}$, corresponding respectively to the unique tweet ID number, the unique user ID number, the text content of the tweet in UTF-8 characters, the tweet's language as determined by Twitter's automated LID software, and the user's self-declared language.
We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset.
Our Twitter LID Datasets ::: Our Balanced Datasets
When creating a balanced Twitter LID dataset, we face a design question: should our dataset seek to maximize the number of languages present, making it more interesting and challenging for the task of LID, but at the cost of having fewer tweets per language in order to include seldom-used languages? Or should we maximize the number of tweets per language, making the dataset more useful for training deep neural networks, but at the cost of having fewer languages present and eliminating the seldom-used ones? To circumvent this issue, we propose to build three datasets: a small-scale one with more languages but fewer tweets, a large-scale one with more tweets but fewer languages, and a medium-scale one that is a compromise between the two extremes. Moreover, since we plan for our datasets to become standard benchmarking tools, we have subdivided the tweets of each language in each dataset into training, validation, and testing sets.
Small-scale dataset: This dataset is composed of 28 languages with 13,000 tweets per language, subdivided into 7,000 training set tweets, 3,000 validation set tweets, and 3,000 testing set tweets. There is thus a total of 364,000 tweets in this dataset. Referring to Table TABREF6, this dataset includes every language that represents 0.002% or more of the Twitter corpus. To be sure, it is possible to create a smaller dataset with all 54 languages but much fewer tweets per language, but we feel that this is the lower limit to be useful for training LID deep neural systems.
Medium scale dataset: This dataset keeps 22 of the 28 languages of the small-scale dataset, but has 10 times as many tweets per language. In other words, each language has a 70,000-tweet training set, a 30,000-tweet validation set, and a 30,000-tweet testing set, for a total of 2,860,000 tweets.
Large-scale dataset: Once again, we increased tenfold the number of tweets per language, and kept the 14 languages that had sufficient tweets in our initial 900 million tweet corpus. This gives us a dataset where each language has 700,000 tweets in its training set, 300,000 tweets in its validation set, and 300,000 tweets in its testing set, for a total 18,200,000 tweets. Referring to Table TABREF6, this dataset includes every language that represents 0.1% or more of the Twitter corpus.
Proposed Model
Since many languages have unclear word boundaries, character n-grams, rather than words, have become widely used as input in LID systems BIBREF2, BIBREF10, BIBREF5, BIBREF6. With this in mind, the LID problem can be defined as follows: given a tweet $\mathit {tw}$ consisting of $n$ ordered characters ($\mathit {tw}=[ch_1, ch_2, ..., ch_n]$) selected within the vocabulary set $\mathit {char}$ of $V$ unique characters ($\mathit {char}=\lbrace ch_1,ch_2, ..., ch_V\rbrace $), the aim is to predict the language $\mathit {\hat{l}}$ present in $tw$ using a classifier:

$\mathit {\hat{l}} = \arg \max _{l_i \in \mathit {l}} Score(l_i|tw)$
where $Score(l_i|tw)$ is a scoring function quantifying how likely language $l_i$ was used given the observed message $tw$.
Most statistical LID systems follow the model of BIBREF2. They start off by using what is called a one-hot encoding technique, which represents each character $ch_i$ as a one-hot vector $\mathbf {x}_i^{oh} \in \mathbb {Z}_2^V$ according to the index of this character in $\mathit {char}$. This transforms $tw$ into a matrix $\mathbf {X}^{oh}$:
The matrix $\mathbf {X}^{oh}$ is passed to a feature extraction function, for example row-wise sum or tf-idf weighting, to obtain a feature vector $\mathbf {h}$. $\mathbf {h}$ is finally fed to a classifier model for either discriminative scoring (e.g. Support Vector Machine) or generative scoring (e.g. Naïve Bayes).
Unlike statistical methods, a typical neural network LID system, as illustrated in Figure FIGREF15, first passes the input through an embedding layer to map each character $ch_i \in tw$ to a low-dimensional dense vector $\mathbf {x}_i \in \mathbb {R}^d$, where $d$ denotes the dimension of the character embeddings. Given an input tweet $tw$, after passing through the embedding layer, we obtain an embedded matrix:
The embedded matrix $\mathbf {X}$ is then fed through a neural network architecture, which transforms it into an output vector $\mathbf {h}=f(\mathbf {X})$ of length L that represents the likelihood of each language, and which is passed through a $\mathit {Softmax}$ function. This updates equation DISPLAY_FORM18 as:
Tweets in particular are noisy messages which can contain a mix of multiple languages. To deal with this challenge, most previous neural network LID systems used deep sequence neural layers, such as an encoder-decoder BIBREF6 or a GRU BIBREF10, to extract global representations at a high computational cost. By contrast, we propose to employ a shallow (single-layer) convolution neural network (CNN) to locally learn region-based features. In addition, we propose to use an attention mechanism to proportionally merge together these local features for an entire tweet $tw$. We hypothesize that the attention mechanism will effectively capture which local features of a particular language are the dominant features of the tweet. There are two major advantages of our proposed architecture: first, the use of the CNN, which has the least number of parameters among other neural networks, simplifies the neural network model and decreases the inference latency; and second, the use of the attention mechanism makes it possible to model the mix of languages while maintaining a competitive performance.
Proposed Model ::: ngram-regional CNN Model
To begin, we present a traditional CNN with an ngram-regional constraint as our baseline. CNNs have been widely used in both image processing BIBREF15 and NLP BIBREF16. The convolution operation of a filter with a region size $m$ is parameterized by a weight matrix $\mathbf {W}_{cnn} \in \mathbb {R}^{d_{cnn}\times md}$ and a bias vector $\mathbf {b}_{cnn} \in \mathbb {R}^{d_{cnn}}$, where $d_{cnn}$ is the dimension of the CNN. The inputs are a sequence of $m$ consecutive input columns in $\mathbf {X}$, represented by a concatenated vector $\mathbf {X}[i:i+m-1] \in \mathbb {R}^{md}$. The region-based feature vector $\mathbf {c}_i$ is computed as follows:

$\mathbf {c}_i = g(\mathbf {W}_{cnn}(\mathbf {x}_i \oplus \mathbf {x}_{i+1} \oplus ... \oplus \mathbf {x}_{i+m-1}) + \mathbf {b}_{cnn})$
where $\oplus $ denotes a concatenation operation and $g$ is a non-linear function. The region filter is slid from the beginning to the end of $\mathbf {X}$ to obtain a convolution matrix $\mathbf {C}$:
The first novelty of our CNN is that we add a zero-padding constraint on both sides of $\mathbf {X}$ to ensure that the number of columns in $\mathbf {C}$ is equal to the number of columns in $\mathbf {X}$. Consequently, each $\mathbf {c}_i$ feature vector corresponds to an $\mathbf {x}_i$ input vector at the same index position $i$, and is learned from concatenating the surrounding $m$-gram embeddings. Particularly:
where $p$ is the number of zero-padding columns. Finally, in a normal CNN, a row-wise max-pooling function is applied on $\mathbf {C}$ to extract the $d_{cnn}$ most salient features, as shown in Equation DISPLAY_FORM26. However, one weakness of this approach is that it extracts the most salient features out of sequence.
Proposed Model ::: Attention Mechanism
Instead of the traditional pooling function of Equation DISPLAY_FORM26, a second important innovation of our CNN model is to use an attention mechanism to model the interaction between region-based features from the beginning to the end of an input. Figure FIGREF15 illustrates our proposed model. Given a sequence of regional feature vectors $\mathbf {C}=[\mathbf {c}_1,\mathbf {c}_2,...,\mathbf {c}_n]$ as computed in Equation DISPLAY_FORM24, we pass it through a fully-connected hidden layer to learn a sequence of regional hidden vectors $\mathbf {H}=[\mathbf {h}_1,\mathbf {h}_2,...,\mathbf {h}_n] \in \mathbb {R}^{d_{hd} \times n}$ using Equation DISPLAY_FORM28.
where $g_2$ is a non-linear activation function, $\mathbf {W}_{hd}$ and $\mathbf {b}_{hd}$ denote model parameters, and $d_{hd}$ is the dimension of the hidden layer. We followed Yang et al. BIBREF17 in employing a regional context vector $\mathbf {u} \in \mathbb {R}^{d_{hd}}$ to measure the importance of each window-based hidden vector. The regional importance factors are computed by:
The importance factors are then fed to a $\mathit {Softmax}$ layer to obtain the normalized weight:
The final representation of a given input is computed by a weighted sum of its regional feature vectors:
Experimental Results ::: Benchmarks
For the benchmarks, we selected five systems. We picked first the langid.py library which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. We obtained publicly-available implementations of each of these algorithms, and test them all against our three datasets. In Table TABREF33, we report each algorithm's accuracy and F1 score, the two metrics usually reported in the LID literature. We also included precision and recall values, which are necessary for computing F1 score. And finally we included the speed in number of messages handled per second. This metric is not often discussed in the LID literature, but is of particular importance when dealing with a massive dataset such as ours or a massive streaming source such as Twitter.
We compare these benchmarks to our two models: the improved CNN as described in Section SECREF22 and our proposed CNN model with an attention mechanism of Section SECREF27. These are labelled CNN and Attention CNN in Table TABREF33. In both models, we filter out characters that appear less than 5 times and apply a dropout approach with a dropout rate of $0.5$. ADAM optimization algorithm and early stopping techniques are employed during training. The full list of parameters and settings is given in Table TABREF32. It is worth noting that we randomly select this configuration without any tuning process.
Experimental Results ::: Analysis
The first thing that stands out from these results is the speed difference between algorithms. CLD3 and langid.py can both process several thousand messages per second, and CLD2 is even an order of magnitude better, but the two neural-network systems perform considerably worse, at less than a dozen messages per second. This is the efficiency trade-off of neural-network LID systems we mentioned in Section SECREF1; although to be fair, we should also point out that those two systems are research prototypes and thus may not have been fully optimized.
In terms of accuracy and F1 score, langid.py, LanideNN, and EquiLID have very similar performances. All three consistently score above 0.90, and each achieves the best accuracy or the best F1 score at some point, if only by 0.002. By contrast, CLD2 and CLD3 have weaker performances; significantly so in the case of CLD3. In all cases, using our small-, medium-, or large-scale test set does not significantly affect the results.
All the benchmark systems were tested using the pre-trained models they come with. For comparison purposes, we retrained langid.py from scratch using the training and validation portion of our datasets, and ran the tests again. Surprisingly, we find that the results are worse for all metrics compared to using its pre-trained model, and moreover that using the medium- and large-scale datasets gives significantly worse results than using the small-scale dataset. This may be because the corpus that langid.py was originally trained on and optimized for is drastically different from ours: it is an imbalanced dataset of 18,269 tweets in 9 languages. Our larger corpora, being more drastically different from the original, give increasingly worse performances. This observation may also explain the almost 10% variation in performance of langid.py reported in the literature and reproduced in Table TABREF1. The fact that the library's message-handling speed also drops massively compared to its pre-trained results further indicates how heavily the software was optimized for its original corpus. Based on this initial result, we decided not to retrain the other benchmark systems.
The last two lines of Table TABREF33 report the results of our basic CNN and our attention CNN LID systems. It can be seen that both of them outperform the benchmark systems in accuracy, precision, recall, and F1 score in all experiments. Moreover, the attention CNN outperforms the basic CNN in every metric (we will explore the benefit of the attention mechanism in the next subsection). In terms of processing speed, only the CLD2 system surpasses ours, but it does so at the cost of a 10% drop in accuracy and F1 score. Looking at the choice of datasets, it can be seen that training with the large-scale dataset leads to a nearly 1% improvement compared to the medium-sized dataset, which also gives a 1% improvement compared to the small-scale dataset. While it is expected that using more training data will lead to a better system and better results, the small improvement indicates that even our small-scale dataset has sufficient messages to allow the network training to converge.
Experimental Results ::: Impact of Attention Mechanism
We can further illustrate the impact of our attention mechanism by displaying the importance factor $\alpha _i$ corresponding to each character $ch_i$ in selected tweets. Table TABREF41 shows a set of tweets that were correctly identified by the attention CNN but misclassified by the regular CNN in three different languages: English, French, and Vietnamese. The color intensity of a letter's cell is proportional to the attention mechanism's normalized weight $\alpha _i$, that is, to the focus the network puts on that character. In other words, the attention CNN puts more importance on the features that have the darkest color. The case studies of Table TABREF41 show the noise-tolerance that comes from the attention mechanism. It can be seen that the system puts virtually no weight on URL links (e.g. $tw_{en_1}$, $tw_{fr_2}$, $tw_{vi_2}$), on hashtags (e.g. $tw_{en_3}$), or on usernames (e.g. $tw_{en_2}$, $tw_{fr_1}$, $tw_{vi_1}$). We should emphasize that our system does not implement any text preprocessing steps; the input tweets are kept as-is. Despite that, the network learned to distinguish between words and non-words, and to focus mainly on the former. In fact, when the network does put attention on these elements, it is when they appear to use real words (e.g. “star” and “seed” in the username of $tw_{en_2}$, “mother” and “none” in the hashtag of $tw_{en_3}$). This also illustrates how the attention mechanism can pick out fine-grained features within noisy text: in those examples, it was able to focus on real-word components of longer non-word strings.
The examples of Table TABREF41 also show that the attention CNN learns to focus on common words to recognize languages. Some of the highest-weighted characters in the example tweets are found in common determiners, adverbs, and verbs of each language. These include “in” ($tw_{en_1}$), “des” ($tw_{fr_1}$), “le” ($tw_{fr_2}$), “est” ($tw_{fr_3}$), “quá” ($tw_{vi_2}$), and “nhất” ($tw_{vi_3}$). These letters and words contribute significantly to identifying the language of a given input.
Finally, when multiple languages are found within a tweet, the network successfully captures all of them. For example, $tw_{fr_3}$ switches from French to Spanish and $tw_{vi_2}$ mixes both English and Vietnamese. In both cases, the network identifies features of both languages; it focuses strongly on “est” and “y” in $tw_{fr_3}$, and on “Don't” and “bài” in $tw_{vi_2}$. The message of $tw_{vi_3}$ mixes three languages, Vietnamese, English, and Korean, and the network focuses on all three parts, by picking out “nhật” and “mừng” in Vietnamese, “#생일축하해” and “#태형생일” in Korean, and “$\textbf {h}ave$” in English. Since our system is set up to classify each tweet into a single language, the strongest feature of each tweet wins out and the message is classified in the corresponding language. Nonetheless, it is significant to see that features of all languages present in the tweet are picked out, and a future version of our system could successfully decompose the tweets into portions of each language.
Conclusion
In this paper, we first demonstrated how to build balanced, automatically-labelled, and massive LID datasets. These datasets are taken from Twitter, and are thus composed of real-world and noisy messages. We applied our technique to build three datasets ranging from hundreds of thousands to tens of millions of short texts. Next, we proposed our new neural LID system, a CNN-based network with an attention mechanism to mitigate the performance bottleneck issue while still maintaining a state-of-the-art performance. The results obtained by our system surpassed five benchmark LID systems by 5% to 10%. Moreover, our analysis of the attention mechanism shed some light on the inner workings of the typically-black-box neural network, and demonstrated how it helps pick out the most important linguistic features while ignoring noise. All of our datasets and source code are publicly available at https://github.com/duytinvo/LID_NN. | langid.py library, encoder-decoder EquiLID system, GRU neural network LanideNN system, CLD2, CLD3 |
203d322743353aac8a3369220e1d023a78c2cae3 | 203d322743353aac8a3369220e1d023a78c2cae3_0 | Q: How was the one year worth of data collected?
Text: Introduction
Language Identification (LID) is the Natural Language Processing (NLP) task of automatically recognizing the language that a document is written in. While this task was called "solved" by some authors over a decade ago, it has seen a resurgence in recent years thanks to the rise in popularity of social media BIBREF0, BIBREF1, and the corresponding daily creation of millions of new messages in dozens of different languages including rare ones that are not often included in language identification systems. Moreover, these messages are typically very short (Twitter messages were until recently limited to 140 characters) and very noisy (including an abundance of spelling mistakes, non-word tokens like URLs, emoticons, or hashtags, as well as foreign-language words in messages of another language), whereas LID was solved using long and clean documents. Indeed, several studies have shown that LID systems trained to a high accuracy on traditional documents suffer significant drops in accuracy when applied to short social-media texts BIBREF2, BIBREF3.
Given its massive scale, multilingual nature, and popularity, Twitter has naturally attracted the attention of the LID research community. Several attempts have been made to construct LID datasets from that resource. However, a major challenge is to assign each tweet in the dataset to the correct language among the more than 70 languages used on the platform. The three commonly-used approaches are to rely on human labeling BIBREF4, BIBREF5, machine detection BIBREF5, BIBREF6, or user geolocation BIBREF3, BIBREF7, BIBREF8. Human labeling is an expensive process in terms of workload, and it is thus infeasible to apply it to create a massive dataset and get the full benefit of Twitter's scale. Automated LID labeling of this data creates a noisy and imperfect dataset, which is to be expected since the purpose of these datasets is to create new and better LID algorithms. And user geolocation is based on the assumption that users in a geographic region use the language of that region; an assumption that is not always correct, which is why this technique is usually paired with one of the other two. Our first contribution in this paper is to propose a new approach to build and automatically label a Twitter LID dataset, and to show that it scales up well by building a dataset of over 18 million labeled tweets. Our hope is that our new Twitter dataset will become a benchmarking standard in the LID literature.
Traditional LID models BIBREF2, BIBREF3, BIBREF9 proposed different ideas to design a set of useful features. This set of features is then passed to traditional machine learning algorithms such as Naive Bayes (NB). The resulting systems are capable of labeling thousands of inputs per second with moderate accuracy. Meanwhile, neural network models BIBREF10, BIBREF6 approach the problem by designing deep and complex architectures such as gated recurrent units (GRU) or encoder-decoder networks. These models take the message text itself as input, as a sequence of character embeddings, and automatically learn its hidden structure via a deep neural network. Consequently, they obtain better results in the task, but with an efficiency trade-off. To alleviate these drawbacks, our second contribution in this paper is to propose a shallow but efficient neural LID algorithm. We follow previous neural LID work BIBREF10, BIBREF6 in using character embeddings as inputs. However, instead of using a deep neural net, we propose to use a shallow ngram-regional convolutional neural network (CNN) with an attention mechanism to learn input representations. We experimentally show that the ngram-regional CNN is the best choice to tackle the bottleneck problem in neural LID. We also illustrate the behaviour of the attention structure in focusing on the most important features in the text for the task. Compared with other benchmarks on our Twitter datasets, our proposed model consistently achieves new state-of-the-art results with an improvement of 5% in accuracy and F1 score and a competitive inference time.
The rest of this paper is structured as follows. After a background review in the next section, we will present our Twitter dataset in Section SECREF3. Our novel LID algorithm will be the topic of Section SECREF4. We will then present and analyze some experiments we conducted with our algorithm in Section SECREF5, along with benchmarking tests of popular and literature LID systems, before drawing some concluding remarks in Section SECREF6. Our Twitter dataset and our LID algorithm's source code are publicly available.
Related Work
In this section, we will consider recent advances on the specific challenge of language identification in short text messages. Readers interested in a general overview of the area of LID, including older work and other challenges in the area, are encouraged to read the thorough survey of BIBREF0.
Related Work ::: Probabilistic LID
One of the first, if not the first, systems for LID specialized for short text messages is the graph-based method of BIBREF5. Their graph is composed of vertices, or character n-grams (n = 3) observed in messages in all languages, and of edges, or connections between successive n-grams weighted by the observed frequency of that connection in each language. Identifying the language of a new message is then done by identifying the most probable path in the graph that generates that message. Their method achieves an accuracy of 0.975 on their own Twitter corpus.
Carter, Weerkamp, and Tsagkias proposed an approach for LID that exploits the very nature of social media text BIBREF3. Their approach computes the prior probability of the message being in a given language independently of the content of the message itself, in five different ways: by identifying the language of external content linked to by the message, the language of previous messages by the same user, the language used by other users explicitly mentioned in the message, the language of previous messages in the on-going conversation, and the language of other messages that share the same hashtags. They achieve a top accuracy of 0.972 when combining these five priors with a linear interpolation.
One of the most popular language identification packages is the langid.py library proposed in BIBREF2, thanks to the fact it is an open-source, ready-to-use library written in the Python programming language. It is a multinomial Naïve Bayes classifier trained on character n-grams (1 $\le $ n $\le $ 4) from 97 different languages. The training data comes from longer document sources, both formal ones (government publications, software documentation, and newswire) and informal ones (online encyclopedia articles and websites). While their system is not specialized for short messages, the authors claim their algorithm can generalize across domains off-the-shelf, and they conducted experiments using the Twitter datasets of BIBREF5 and BIBREF3 that achieved accuracies of 0.941 and 0.886 respectively, which is weaker than the specialized short-message LID systems of BIBREF5 and BIBREF3.
Starting from the basic observation of Zipf's Law, that each language has a small number of words that occur very frequently in most documents, the authors of BIBREF9 created a dictionary-based algorithm they called Quelingua. This algorithm includes ranked dictionaries of the 1,000 most popular words of each language it is trained to recognize. Given a new message, recognized words are given a weight based on their rank in each language, and the identified language is the one with the highest sum of word weights. Quelingua achieves an F1-score of 0.733 on the TweetLID competition corpus BIBREF11, a narrow improvement over a trigram Naïve Bayes classifier which achieves an F1-Score of 0.727 on the same corpus, but below the best results achieved in the competition.
Related Work ::: Neural Network LID
Neural network models have been applied on many NLP problems in recent years with great success, achieving excellent performance on challenges ranging from text classification BIBREF12 to sequence labeling BIBREF13. In LID, the authors of BIBREF1 built a hierarchical system of two neural networks. The first level is a Convolutional Neural Network (CNN) that converts white-space-delimited words into a word vector. The second level is a Long-Short-Term Memory (LSTM) network (a type of recurrent neural network (RNN)) that takes in sequences of word vectors outputted by the first level and maps them to language labels. They trained and tested their network on Twitter's official Twitter70 dataset, and achieved an F-score of 0.912, compared to langid.py's performance of 0.879 on the same dataset. They also trained and tested their system using the TweetLID corpus and achieved an F1-score of 0.762, above the system of BIBREF9 presented earlier, and above the top system of the TweetLID competition, the SVM LID system of BIBREF14 which achieved an F1-score of 0.752.
The authors of BIBREF10 also used a RNN system, but preferred the Gated Recurrent Unit (GRU) architecture to the LSTM, indicating it performed slightly better in their experiments. Their system breaks the text into non-overlapping 200-character segments, and feeds character n-grams (n = 8) into the GRU network to classify each letter into a probable language. The segment's language is simply the most probable language over all letters, and the text's language is the most probable language over all segments. The authors tested their system on short messages, but not on tweets; they built their own corpus of short messages by dividing their data into 200-character segments. On that corpus, they achieve an accuracy of 0.955, while langid.py achieves 0.912.
The authors of BIBREF6 also created a character-level LID network using a GRU architecture, in the form of a three-layer encoder-decoder RNN. They trained and tested their system using their own Twitter dataset, and achieved an F1-score of 0.982, while langid.py achieved 0.960 on the same dataset.
To summarize, we present the key results of the papers reviewed in this section in Table TABREF1, along with the results langid.py obtained on the same datasets as benchmark.
Our Twitter LID Datasets ::: Source Data and Language Labeling
Unlike other authors who built Twitter datasets, we chose not to mine tweets from Twitter directly through their API, but instead use tweets that have already been downloaded and archived on the Internet Archive. This has two important benefits: this site makes its content freely available for research purposes, unlike Twitter which comes with restrictions (especially on distribution), and the tweets are backed-up permanently, as opposed to Twitter where tweets may be deleted at any time and become unavailable for future research or replication of past studies. The Internet Archive has made available a set of 1.7 billion tweets collected over the year of 2017 in a 600GB JSON file which includes all tweet metadata attributes. Five of these attributes are of particular importance to us. They are $\it {tweet.id}$, $\it {tweet.user.id}$, $\it {tweet.text}$, $\it {tweet.lang}$, and $\it {tweet.user.lang}$, corresponding respectively to the unique tweet ID number, the unique user ID number, the text content of the tweet in UTF-8 characters, the tweet's language as determined by Twitter's automated LID software, and the user's self-declared language.
We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset.
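A minimal sketch of this filtering step, assuming the archive is read line by line as JSON records and using only the tweet attributes listed above, is given below; the helper name and iteration scheme are chosen for illustration and are not taken from a released implementation.

```python
import json

def filter_and_label(json_lines):
    """Yield (tweet ID, text, label) for tweets whose Twitter-detected
    language matches the user's self-declared language; that common
    language becomes the tweet's label."""
    for line in json_lines:
        tweet = json.loads(line)
        detected = tweet.get("lang")
        declared = tweet.get("user", {}).get("lang")
        if detected and declared and detected == declared:
            yield tweet["id"], tweet["text"], detected
```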
Our Twitter LID Datasets ::: Our Balanced Datasets
When creating a balanced Twitter LID dataset, we face a design question: should our dataset seek to maximize the number of languages present, to make it more interesting and challenging for the task of LID, but at the cost of having fewer tweets per language to include seldom-used languages. Or should we maximize the number of tweets per language to make the dataset more useful for training deep neural networks, but at the cost of having fewer languages present and eliminating the seldom-used languages. To circumvent this issue, we propose to build three datasets: a small-scale one with more languages but fewer tweets, a large-scale one with more tweets but fewer languages, and a medium-scale one that is a compromise between the two extremes. Moreover, since we plan for our datasets to become standard benchmarking tools, we have subdivided the tweets of each language in each dataset into training, validation, and testing sets.
Small-scale dataset: This dataset is composed of 28 languages with 13,000 tweets per language, subdivided into 7,000 training set tweets, 3,000 validation set tweets, and 3,000 testing set tweets. There is thus a total of 364,000 tweets in this dataset. Referring to Table TABREF6, this dataset includes every language that represents 0.002% or more of the Twitter corpus. To be sure, it is possible to create a smaller dataset with all 54 languages but far fewer tweets per language; however, we feel that this is the lower limit of usefulness for training deep neural LID systems.
Medium scale dataset: This dataset keeps 22 of the 28 languages of the small-scale dataset, but has 10 times as many tweets per language. In other words, each language has a 70,000-tweet training set, a 30,000-tweet validation set, and a 30,000-tweet testing set, for a total of 2,860,000 tweets.
Large-scale dataset: Once again, we increased tenfold the number of tweets per language, and kept the 14 languages that had sufficient tweets in our initial 900 million tweet corpus. This gives us a dataset where each language has 700,000 tweets in its training set, 300,000 tweets in its validation set, and 300,000 tweets in its testing set, for a total 18,200,000 tweets. Referring to Table TABREF6, this dataset includes every language that represents 0.1% or more of the Twitter corpus.
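The sampling of these balanced splits can be sketched as follows; the function and parameter names are illustrative, and the split sizes are passed in explicitly (e.g. 7,000/3,000/3,000 tweets per language for the small-scale dataset).

```python
import random
from collections import defaultdict

def balanced_splits(labelled_tweets, languages, n_train, n_valid, n_test, seed=0):
    """Build balanced train/validation/test splits from (text, language) pairs."""
    by_lang = defaultdict(list)
    for text, lang in labelled_tweets:
        if lang in languages:
            by_lang[lang].append(text)
    rng = random.Random(seed)
    splits = {"train": [], "valid": [], "test": []}
    for lang in languages:
        pool = by_lang[lang]
        rng.shuffle(pool)
        needed = n_train + n_valid + n_test
        assert len(pool) >= needed, f"not enough tweets for {lang}"
        splits["train"] += [(t, lang) for t in pool[:n_train]]
        splits["valid"] += [(t, lang) for t in pool[n_train:n_train + n_valid]]
        splits["test"] += [(t, lang) for t in pool[n_train + n_valid:needed]]
    return splits
```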
Proposed Model
Since many languages have unclear word boundaries, character n-grams, rather than words, have become widely used as input in LID systems BIBREF2, BIBREF10, BIBREF5, BIBREF6. With this in mind, the LID problem can be defined as follows: given a tweet $\mathit {tw}$ consisting of $n$ ordered characters ($\mathit {tw}=[ch_1, ch_2, ..., ch_n]$) selected within the vocabulary set $\mathit {char}$ of $V$ unique characters ($\mathit {char}=\lbrace ch_1,ch_2, ..., ch_V\rbrace $) and a set $\mathit {l}$ of $L$ languages ($\mathit {l}=\lbrace l_1,l_2, ..., l_L\rbrace $), the aim is to predict the language $\mathit {\hat{l}}$ present in $tw$ using a classifier:
where $Score(l_i|tw)$ is a scoring function quantifying how likely language $l_i$ was used given the observed message $tw$.
Most statistical LID systems follow the model of BIBREF2. They start off by using what is called a one-hot encoding technique, which represents each character $ch_i$ as a one-hot vector $\mathbf {x}_i^{oh} \in \mathbb {Z}_2^V$ according to the index of this character in $\mathit {char}$. This transforms $tw$ into a matrix $\mathbf {X}^{oh}$:
The vector $\mathbf {X}^{oh}$ is passed to a feature extraction function, for example row-wise sum or tf-idf weighting, to obtain a feature vector $\mathbf {h}$. $\mathbf {h}$ is finally fed to a classifier model for either discriminative scoring (e.g. Support Vector Machine) or generative scoring (e.g. Naïve Bayes).
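As an illustration, such a statistical pipeline can be sketched with scikit-learn, using character n-gram counts as the feature vector $\mathbf {h}$ and a multinomial Naïve Bayes classifier for the generative scoring; the 1-to-4 character n-gram range shown here mirrors langid.py and is only an assumed setting.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Character n-gram counts play the role of the feature vector h;
# multinomial Naive Bayes provides the generative scoring of each language.
statistical_lid = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 4)),
    MultinomialNB(),
)
# statistical_lid.fit(train_texts, train_labels)
# predicted_languages = statistical_lid.predict(test_texts)
```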
Unlike statistical methods, a typical neural network LID system, as illustrated in Figure FIGREF15, first passes the input through an embedding layer to map each character $ch_i \in tw$ to a low-dimensional dense vector $\mathbf {x}_i \in \mathbb {R}^d$, where $d$ denotes the dimension of the character embeddings. Given an input tweet $tw$, after passing through the embedding layer, we obtain an embedded matrix:
The embedded matrix $\mathbf {X}$ is then fed through a neural network architecture, which transforms it into an output vector $\mathbf {h}=f(\mathbf {X})$ of length L that represents the likelihood of each language, and which is passed through a $\mathit {Softmax}$ function. This updates equation DISPLAY_FORM18 as:
Tweets in particular are noisy messages which can contain a mix of multiple languages. To deal with this challenge, most previous neural network LID systems used deep sequential neural layers, such as an encoder-decoder BIBREF6 or a GRU BIBREF10, to extract global representations at a high computational cost. By contrast, we propose to employ a shallow (single-layer) convolutional neural network (CNN) to locally learn region-based features. In addition, we propose to use an attention mechanism to proportionally merge together these local features for an entire tweet $tw$. We hypothesize that the attention mechanism will effectively capture which local features of a particular language are the dominant features of the tweet. There are two major advantages of our proposed architecture: first, the use of the CNN, which has the fewest parameters among the neural architectures considered, simplifies the neural network model and decreases the inference latency; and second, the use of the attention mechanism makes it possible to model the mix of languages while maintaining a competitive performance.
Proposed Model ::: ngram-regional CNN Model
To begin, we present a traditional CNN with an ngram-regional constraint as our baseline. CNNs have been widely used in both image processing BIBREF15 and NLP BIBREF16. The convolution operation of a filter with a region size $m$ is parameterized by a weight matrix $\mathbf {W}_{cnn} \in \mathbb {R}^{d_{cnn}\times md}$ and a bias vector $\mathbf {b}_{cnn} \in \mathbb {R}^{d_{cnn}}$, where $d_{cnn}$ is the dimension of the CNN. The inputs are a sequence of $m$ consecutive input columns in $\mathbf {X}$, represented by a concatenated vector $\mathbf {X}[i:i+m-1] \in \mathbb {R}^{md}$. The region-based feature vector $\mathbf {c}_i$ is computed as follows:
where $\oplus $ denotes a concatenation operation and $g$ is a non-linear function. The region filter is slid from the beginning to the end of $\mathbf {X}$ to obtain a convolution matrix $\mathbf {C}$:
The first novelty of our CNN is that we add a zero-padding constraint on both sides of $\mathbf {X}$ to ensure that the number of columns in $\mathbf {C}$ is equal to the number of columns in $\mathbf {X}$. Consequently, each $\mathbf {c}_i$ feature vector corresponds to an $\mathbf {x}_i$ input vector at the same index position $i$, and is learned from concatenating the surrounding $m$-gram embeddings. In particular:
where $p$ is the number of zero-padding columns. Finally, in a normal CNN, a row-wise max-pooling function is applied on $\mathbf {C}$ to extract the $d_{cnn}$ most salient features, as shown in Equation DISPLAY_FORM26. However, one weakness of this approach is that it extracts the most salient features out of sequence.
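A minimal PyTorch sketch of this baseline follows. The embedding and filter dimensions, the choice of ReLU for the non-linearity $g$, and the final linear layer are illustrative assumptions (the paper's actual settings are those of Table TABREF32), and the same-length padding shown assumes an odd region size $m$.

```python
import torch
import torch.nn as nn

class NgramRegionalCNN(nn.Module):
    """Character embeddings -> same-length 1D convolution over m-gram regions
    (zero padding) -> row-wise max pooling -> language scores."""
    def __init__(self, vocab_size, n_langs, d_emb=128, d_cnn=256, m=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb, padding_idx=0)
        # padding=(m-1)//2 keeps as many output columns as input characters.
        self.conv = nn.Conv1d(d_emb, d_cnn, kernel_size=m, padding=(m - 1) // 2)
        self.out = nn.Linear(d_cnn, n_langs)

    def forward(self, char_ids):                    # (batch, n_chars)
        x = self.embed(char_ids).transpose(1, 2)    # (batch, d_emb, n_chars)
        c = torch.relu(self.conv(x))                # regional features C
        h = c.max(dim=2).values                     # row-wise max pooling
        return self.out(h)                          # scores over languages
```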
Proposed Model ::: Attention Mechanism
Instead of the traditional pooling function of Equation DISPLAY_FORM26, a second important innovation of our CNN model is to use an attention mechanism to model the interaction between region-based features from the beginning to the end of an input. Figure FIGREF15 illustrates our proposed model. Given a sequence of regional feature vectors $\mathbf {C}=[\mathbf {c}_1,\mathbf {c}_2,...,\mathbf {c}_n]$ as computed in Equation DISPLAY_FORM24, we pass it through a fully-connected hidden layer to learn a sequence of regional hidden vectors $\mathbf {H}=[\mathbf {h}_1,\mathbf {h}_2,...,\mathbf {h}_n] \in \mathbb {R}^{d_{hd} \times n}$ using Equation DISPLAY_FORM28.
where $g_2$ is a non-linear activation function, $\mathbf {W}_{hd}$ and $\mathbf {b}_{hd}$ denote model parameters, and $d_{hd}$ is the dimension of the hidden layer. We followed Yang et al. BIBREF17 in employing a regional context vector $\mathbf {u} \in \mathbb {R}^{d_{hd}}$ to measure the importance of each window-based hidden vector. The regional importance factors are computed by:
The importance factors are then fed to a $\mathit {Softmax}$ layer to obtain the normalized weight:
The final representation of a given input is computed by a weighted sum of its regional feature vectors:
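A PyTorch sketch of this attention pooling is given below; the use of $\tanh$ for $g_2$ and the tensor shapes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RegionalAttention(nn.Module):
    """Hidden projection, regional context vector u, softmax-normalised
    importance weights, and a weighted sum of the regional feature vectors."""
    def __init__(self, d_cnn, d_hd):
        super().__init__()
        self.hidden = nn.Linear(d_cnn, d_hd)
        self.context = nn.Parameter(torch.randn(d_hd))   # regional context vector u

    def forward(self, C):
        # C: regional feature vectors, e.g. the conv output transposed
        # to shape (batch, n_chars, d_cnn).
        H = torch.tanh(self.hidden(C))                   # regional hidden vectors
        e = H @ self.context                             # importance factors
        alpha = torch.softmax(e, dim=1)                  # normalised weights
        pooled = (alpha.unsqueeze(-1) * C).sum(dim=1)    # weighted sum of C
        return pooled, alpha
```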
Experimental Results ::: Benchmarks
For the benchmarks, we selected five systems. We first picked the langid.py library, which is frequently used to compare systems in the literature. Since our work is in neural-network LID, we selected two neural network systems from the literature, specifically the encoder-decoder EquiLID system of BIBREF6 and the GRU neural network LanideNN system of BIBREF10. Finally, we included CLD2 and CLD3, two implementations of the Naïve Bayes LID software used by Google in their Chrome web browser BIBREF4, BIBREF0, BIBREF8 and sometimes used as a comparison system in the LID literature BIBREF7, BIBREF6, BIBREF8, BIBREF2, BIBREF10. We obtained publicly-available implementations of each of these algorithms and tested them all against our three datasets. In Table TABREF33, we report each algorithm's accuracy and F1 score, the two metrics usually reported in the LID literature. We also include precision and recall values, which are necessary for computing the F1 score. Finally, we include the speed in number of messages handled per second. This metric is not often discussed in the LID literature, but is of particular importance when dealing with a massive dataset such as ours or a massive streaming source such as Twitter.
We compare these benchmarks to our two models: the improved CNN described in Section SECREF22 and our proposed CNN model with an attention mechanism of Section SECREF27. These are labelled CNN and Attention CNN in Table TABREF33. In both models, we filter out characters that appear fewer than 5 times and apply dropout with a rate of $0.5$. The ADAM optimization algorithm and early stopping are employed during training. The full list of parameters and settings is given in Table TABREF32. It is worth noting that we selected this configuration randomly, without any tuning process.
Experimental Results ::: Analysis
The first thing that appears from these results is the speed difference between algorithms. CLD3 and langid.py can both process several thousand messages per second, and CLD2 is even an order of magnitude faster, but the two neural-network systems perform considerably worse, at less than a dozen messages per second. This is the efficiency trade-off of neural-network LID systems we mentioned in Section SECREF1; although to be fair, we should also point out that those two systems are research prototypes and thus may not have been fully optimized.
In terms of accuracy and F1 score, langid.py, LanideNN, and EquiLID have very similar performances. All three consistently score above 0.90, and each achieves the best accuracy or the best F1 score at some point, if only by 0.002. By contrast, CLD2 and CLD3 have weaker performances; significantly so in the case of CLD3. In all cases, using our small-, medium-, or large-scale test set does not significantly affect the results.
All the benchmark systems were tested using the pre-trained models they come with. For comparison purposes, we retrained langid.py from scratch using the training and validation portions of our datasets, and ran the tests again. Surprisingly, we find that the results are worse on all metrics compared to using its pre-trained model, and moreover that using the medium- and large-scale datasets gives significantly worse results than using the small-scale dataset. This may be a result of the fact that the corpus on which the langid.py software was originally trained and optimized is drastically different from ours: an imbalanced dataset of 18,269 tweets in 9 languages. Our larger corpora, being more drastically different from the original, give increasingly worse performances. This observation may also explain the almost 10% variation in performance of langid.py reported in the literature and reproduced in Table TABREF1. The fact that the message-handling performance of the library drops massively compared to its pre-trained results further indicates how the software was optimized for its corpus. Based on this initial result, we decided not to retrain the other benchmark systems.
The last two lines of Table TABREF33 report the results of our basic CNN and our attention CNN LID systems. It can be seen that both of them outperform the benchmark systems in accuracy, precision, recall, and F1 score in all experiments. Moreover, the attention CNN outperforms the basic CNN in every metric (we will explore the benefit of the attention mechanism in the next subsection). In terms of processing speed, only the CLD2 system surpasses ours, but it does so at the cost of a 10% drop in accuracy and F1 score. Looking at the choice of datasets, it can be seen that training with the large-scale dataset leads to a nearly 1% improvement compared to the medium-sized dataset, which also gives a 1% improvement compared to the small-scale dataset. While it is expected that using more training data will lead to a better system and better results, the small improvement indicates that even our small-scale dataset has sufficient messages to allow the network training to converge.
Experimental Results ::: Impact of Attention Mechanism
We can further illustrate the impact of our attention mechanism by displaying the importance factor $\alpha _i$ corresponding to each character $ch_i$ in selected tweets. Table TABREF41 shows a set of tweets that were correctly identified by the attention CNN but misclassified by the regular CNN in three different languages: English, French, and Vietnamese. The color intensity of a letter's cell is proportional to the attention mechanism's normalized weight $\alpha _i$, or equivalently to the focus the network puts on that character. In other words, the attention CNN puts more importance on the features that have the darkest color. The case studies of Table TABREF41 show the noise tolerance that comes from the attention mechanism. It can be seen that the system puts virtually no weight on URL links (e.g. $tw_{en_1}$, $tw_{fr_2}$, $tw_{vi_2}$), on hashtags (e.g. $tw_{en_3}$), or on usernames (e.g. $tw_{en_2}$, $tw_{fr_1}$, $tw_{vi_1}$). We should emphasize that our system does not implement any text preprocessing steps; the input tweets are kept as-is. Despite that, the network learned to distinguish between words and non-words, and to focus mainly on the former. In fact, when the network does put attention on these elements, it is when they appear to use real words (e.g. “star" and “seed" in the username of $tw_{en_2}$, “mother" and “none" in the hashtag of $tw_{en_3}$). This also illustrates how the attention mechanism can pick out fine-grained features within noisy text: in those examples, it was able to focus on real-word components of longer non-word strings.
The examples of Table TABREF41 also show that the attention CNN learns to focus on common words to recognize languages. Some of the highest-weighted characters in the example tweets are found in common determiners, adverbs, and verbs of each language. These include “in" ($tw_{en_1}$), “des" ($tw_{fr_1}$), “le" ($tw_{fr_2}$), “est" ($tw_{fr_3}$), “quá" ($tw_{vi_2}$), and “nhất" ($tw_{vi_3}$). These letters and words contribute significantly to identifying the language of a given input.
Finally, when multiple languages are found within a tweet, the network successfully captures all of them. For example, $tw_{fr_3}$ switches from French to Spanish and $tw_{vi_2}$ mixes both English and Vietnamese. In both cases, the network identifies features of both languages; it focuses strongly on “est" and “y" in $tw_{fr_3}$, and on “Don't" and “bài" in $tw_{vi_2}$. The message of $tw_{vi_3}$ mixes three languages, Vietnamese, English, and Korean, and the network focuses on all three parts, by picking out “nhật" and “mừng" in Vietnamese, “#생일축하해" and “#태형생일" in Korean, and “$\textbf {h}ave$" in English. Since our system is set up to classify each tweet into a single language, the strongest feature of each tweet wins out and the message is classified in the corresponding language. Nonetheless, it is significant to see that features of all languages present in the tweet are picked out, and a future version of our system could successfully decompose the tweets into portions of each language.
Conclusion
In this paper, we first demonstrated how to build balanced, automatically-labelled, and massive LID datasets. These datasets are taken from Twitter, and are thus composed of real-world and noisy messages. We applied our technique to build three datasets ranging from hundreds of thousands to tens of millions of short texts. Next, we proposed our new neural LID system, a CNN-based network with an attention mechanism to mitigate the performance bottleneck issue while still maintaining a state-of-the-art performance. The results obtained by our system surpassed five benchmark LID systems by 5% to 10%. Moreover, our analysis of the attention mechanism shed some light on the inner workings of the typically-black-box neural network, and demonstrated how it helps pick out the most important linguistic features while ignoring noise. All of our datasets and source code are publicly available at https://github.com/duytinvo/LID_NN. | Unanswerable |
557d1874f736d9d487eb823fe8f6dab4b17c3c42 | 557d1874f736d9d487eb823fe8f6dab4b17c3c42_0 | Q: Which language family does Mboshi belong to?
Text: Introduction
All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed at helping field linguists collect, transcribe, analyze, and annotate endangered language data (e.g. BIBREF0, BIBREF1). A remarkable effort in this direction has improved data collection procedures and tools BIBREF2, BIBREF3, enabling the collection of corpora for an increasing number of endangered languages (e.g. BIBREF4).
One of the basic tasks of computational language documentation (CLD) is to identify word or morpheme boundaries in an unsegmented phonemic or orthographic stream. Several unsupervised monolingual word segmentation algorithms exist in the literature, based, for instance, on information-theoretic BIBREF5, BIBREF6 or nonparametric Bayesian techniques BIBREF7, BIBREF8. These techniques are, however, challenged in real-world settings by the small amount of available data.
A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language (WL), which often exist for such data, hoping that the bilingual context will provide additional cues to guide the segmentation algorithm. Such techniques have already been explored, for instance, in BIBREF9, BIBREF10 in the context of improving statistical alignment and translation models; and in BIBREF11, BIBREF12, BIBREF13 using Attentional Neural Machine Translation (NMT) models. In these latter studies, word segmentation is obtained by post-processing attention matrices, taking attention information as a noisy proxy to word alignment BIBREF14.
In this paper, we explore ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information. Our main contribution is a new loss function for jointly learning alignment and segmentation in neural translation models, allowing us to better control the length of utterances. Our experiments with an actual under-resourced language (UL), Mboshi BIBREF17, show that this technique outperforms our bilingual segmentation baseline.
Recurrent architectures in NMT
In this section, we briefly review the main concepts of recurrent architectures for machine translation introduced in BIBREF18, BIBREF19, BIBREF20. In our setting, the source and target sentences are always observed and we are mostly interested in the attention mechanism that is used to induce word segmentation.
Recurrent architectures in NMT ::: RNN encoder-decoder
Sequence-to-sequence models transform a variable-length source sequence into a variable-length target output sequence. In our context, the source sequence is a sequence of words $w_1, \ldots , w_J$ and the target sequence is an unsegmented sequence of phonemes or characters $\omega _1, \ldots , \omega _I$. In the RNN encoder-decoder architecture, an encoder consisting of a RNN reads a sequence of word embeddings $e(w_1),\dots ,e(w_J)$ representing the source and produces a dense representation $c$ of this sentence in a low-dimensional vector space. Vector $c$ is then fed to an RNN decoder producing the output translation $\omega _1,\dots ,\omega _I$ sequentially.
At each step of the input sequence, the encoder hidden states $h_j$ are computed as:
In most cases, $\phi $ corresponds to a long short-term memory (LSTM) BIBREF24 unit or a gated recurrent unit (GRU) BIBREF25, and $h_J$ is used as the fixed-length context vector $c$ initializing the RNN decoder.
On the target side, the decoder predicts each word $\omega _i$, given the context vector $c$ (in the simplest case, $h_J$, the last hidden state of the encoder) and the previously predicted words, using the probability distribution over the output vocabulary $V_T$:
where $s_i$ is the hidden state of the decoder RNN and $g$ is a nonlinear function (e.g. a multi-layer perceptron with a softmax layer) computed by the output layer of the decoder. The hidden state $s_i$ is then updated according to:
where $f$ again corresponds to the function computed by an LSTM or GRU cell.
The encoder and the decoder are trained jointly to maximize the likelihood of the translation $\mathrm {\Omega }=\Omega _1, \dots , \Omega _I$ given the source sentence $\mathrm {w}=w_1,\dots ,w_J$. As reference target words are available during training, $\Omega _i$ (and the corresponding embedding) can be used instead of $\omega _i$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6), a technique known as teacher forcing BIBREF26.
Recurrent architectures in NMT ::: The attention mechanism
Encoding a variable-length source sentence in a fixed-length vector can lead to poor translation results with long sentences BIBREF19. To address this problem, BIBREF20 introduces an attention mechanism which provides a flexible source context to better inform the decoder's decisions. This means that the fixed context vector $c$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) is replaced with a position-dependent context $c_i$, defined as:
where weights $\alpha _{ij}$ are computed by an attention model made of a multi-layer perceptron (MLP) followed by a softmax layer. Denoting $a$ the function computed by the MLP, then
where $e_{ij}$ is known as the energy associated to $\alpha _{ij}$. Lines in the attention matrix $A = (\alpha _{ij})$ sum to 1, and weights $\alpha _{ij}$ can be interpreted as the probability that target word $\omega _i$ is aligned to source word $w_j$. BIBREF20 qualitatively investigated such soft alignments and concluded that their model can correctly align target words to relevant source words (see also BIBREF27, BIBREF28). Our segmentation method (Section SECREF3) relies on the assumption that the same holds when aligning characters or phonemes on the target side to source words.
Attention-based word segmentation
Recall that our goal is to discover words in an unsegmented stream of target characters (or phonemes) in the under-resourced language. In this section, we first describe a baseline method inspired by the “align to segment” of BIBREF12, BIBREF13. We then propose two extensions providing the model with a signal relevant to the segmentation process, so as to move towards a joint learning of segmentation and alignment.
Attention-based word segmentation ::: Align to segment
An attention matrix $A = (\alpha _{ij})$ can be interpreted as a soft alignment matrix between target and source units, where each cell $\alpha _{ij}$ corresponds to the probability for target symbols $\omega _i$ (here, a phone) to be aligned to the source word $w_j$ (cf. Equation (DISPLAY_FORM10)). In our context, where words need to be discovered on the target side, we follow BIBREF12, BIBREF13 and perform word segmentation as follows:
train an attentional RNN encoder-decoder model with attention using teacher forcing (see Section SECREF2);
force-decode the entire corpus and extract one attention matrix for each sentence pair.
identify boundaries in the target sequences. For each target unit $\omega _i$ of the UL, we identify the source word $w_{a_i}$ to which it is most likely aligned : $\forall i, a_i = \operatornamewithlimits{argmax}_j \alpha _{ij}$. Given these alignment links, a word segmentation is computed by introducing a word boundary in the target whenever two adjacent units are not aligned with the same source word ($a_i \ne a_{i+1}$).
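A minimal sketch of this last step is given below, assuming that the attention matrix (one row per target unit, one column per source word) has already been extracted by force-decoding; the helper name and array conventions are chosen for illustration.

```python
import numpy as np

def segment_from_attention(attention, target_units):
    """Insert a word boundary between two adjacent target units whenever
    they are aligned (argmax over source words) to different source words.
    Assumes a non-empty list of target units."""
    alignments = np.asarray(attention).argmax(axis=1)    # a_i for each target unit
    words, current = [], [target_units[0]]
    for i in range(1, len(target_units)):
        if alignments[i] != alignments[i - 1]:           # a_i != a_{i-1}: new word
            words.append("".join(current))
            current = []
        current.append(target_units[i])
    words.append("".join(current))
    return words
```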
Considering a (simulated) low-resource setting, and building on BIBREF14's work, BIBREF11 propose to smooth attentional alignments, either by post-processing attention matrices, or by flattening the softmax function in the attention model (see Equation (DISPLAY_FORM10)) with a temperature parameter $T$. This makes sense as the authors examine attentional alignments obtained while training from UL phonemes to WL words. But when translating from WL words to UL characters, this seems less useful: smoothing will encourage a character to align to many words. This technique is further explored by BIBREF29, who make the temperature parameter trainable and specific to each decoding step, so that the model can learn how to control the softness or sharpness of attention distributions, depending on the current word being decoded.
Attention-based word segmentation ::: Towards joint alignment and segmentation
One limitation in the approach described above lies in the absence of signal relative to segmentation during RNN training. Attempting to move towards a joint learning of alignment and segmentation, we propose here two extensions aimed at introducing constraints derived from our segmentation heuristic in the training process.
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Word-length bias
Our first extension relies on the assumption that the length of aligned source and target words should correlate. Being in a relationship of mutual translation, aligned words are expected to have comparable frequencies and meaning, hence comparable lengths. This means that the longer a source word is, the more target units should be aligned to it. We implement this idea in the attention mechanism as a word-length bias, changing the computation of the context vector from Equation (DISPLAY_FORM9) to:
where $\psi $ is a monotonically increasing function of the length $|w_j|$ of word $w_j$. This will encourage target units to attend more to longer source words. In practice, we choose $\psi $ to be the identity function and renormalize so as to ensure that lines still sum to 1 in the attention matrices. The context vectors $c_i$ are now computed with attention weights $\tilde{\alpha }_{ij}$ as:
We finally derive the target segmentation from the attention matrix $A = (\tilde{\alpha }_{ij})$, following the method of Section SECREF11.
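In matrix form, this re-weighting amounts to the following sketch, where $\psi$ is the identity as stated above; the tensor shapes and the helper name are illustrative.

```python
import torch

def length_biased_attention(alpha, source_lengths):
    """Multiply each attention column by the corresponding source word length
    (psi = identity) and renormalise every line so that it sums to 1.
    alpha: (I, J) attention matrix; source_lengths: (J,) word lengths."""
    biased = alpha * source_lengths.float().unsqueeze(0)   # alpha_ij * |w_j|
    return biased / biased.sum(dim=1, keepdim=True)        # lines sum to 1 again
```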
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Introducing an auxiliary loss function
Another way to inject segmentation awareness inside our training procedure is to control the number of target words that will be produced during post-processing. The intuition here is that notwithstanding typological discrepancies, the target segmentation should yield a number of target words that is close to the length of the source.
To this end, we complement the main loss function with an additional term $\mathcal {L}_\mathrm {AUX}$ defined as:
The rationale behind this additional term is as follows: recall that a boundary is then inserted on the target side whenever two consecutive units are not aligned to the same source word. The dot product between consecutive lines in the attention matrix will be close to 1 if consecutive target units are aligned to the same source word, and closer to 0 if they are not. The summation thus quantifies the number of target units that will not be followed by a word boundary after segmentation, and $I - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *}$ measures the number of word boundaries that are produced on the target side. Minimizing this auxiliary term should guide the model towards learning attention matrices resulting in target segmentations that have the same number of words on the source and target sides.
Figure FIGREF25 illustrates the effect of our auxiliary loss on an example. Without auxiliary loss, the segmentation will yield, in this case, 8 target segments (Figure FIGREF25), while the attention learnt with auxiliary loss will yield 5 target segments (Figure FIGREF25); source sentence, on the other hand, has 4 tokens.
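The auxiliary term can be sketched as follows. The formula is reconstructed from the description above and from the ratio variant used in the experiments (a ratio of 1 recovers the plain auxiliary loss), and the absolute value is smoothed to keep the term differentiable, as discussed in the implementation details.

```python
import torch

def auxiliary_loss(alpha, n_source_words, length_ratio=1.0, eps=1e-3):
    """Smoothed |#target words implied by alpha - ratio * #source words|.
    alpha: (I, J) attention matrix over I target units and J source words."""
    I = alpha.size(0)
    # Dot products of consecutive rows: close to 1 when two adjacent target
    # units attend to the same source word, close to 0 otherwise.
    same_word = (alpha[:-1] * alpha[1:]).sum()
    n_target_words = I - same_word
    diff = n_target_words - length_ratio * n_source_words
    return torch.sqrt(diff ** 2 + eps)    # differentiable surrogate for |diff|
```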
Experiments and discussion
In this section, we describe implementation details for our baseline segmentation system and for the extensions proposed in Section SECREF17, before presenting data and results.
Experiments and discussion ::: Implementation details
Our baseline system is our own reimplementation of Bahdanau's encoder-decoder with attention in PyTorch BIBREF31. The last version of our code, which handles mini-batches efficiently, heavily borrows from Joost Basting's code. Source sentences include an end-of-sentence (EOS) symbol (corresponding to $w_J$ in our notation) and target sentences include both a beginning-of-sentence (BOS) and an EOS symbol. Padding of source and target sentences in mini-batches is required, as well as masking in the attention matrices and during loss computation. Our architecture follows BIBREF20 very closely with some minor changes.
We use a single-layer bidirectional RNN BIBREF32 with GRU cells: these have been shown to perform similarly to LSTM-based RNNs BIBREF33, while computationally more efficient. We use 64-dimensional hidden states for the forward and backward RNNs, and for the embeddings, similarly to BIBREF12, BIBREF13. In Equation (DISPLAY_FORM4), $h_j$ corresponds to the concatenation of the forward and backward states for each step $j$ of the source sequence.
The alignment MLP model computes function $a$ from Equation (DISPLAY_FORM10) as $a(s_{i-1}, h_j)=v_a^\top \tanh (W_a s_{i-1} + U_a h_j)$ – see Appendix A.1.2 in BIBREF20 – where $v_a$, $W_a$, and $U_a$ are weight matrices. For the computation of weights $\tilde{\alpha _{ij}}$ in the word-length bias extension (Equation (DISPLAY_FORM21)), we arbitrarily attribute a length of 1 to the EOS symbol on the source side.
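A PyTorch sketch of this alignment MLP follows; the dimensions are illustrative and the source padding mask is omitted for brevity.

```python
import torch
import torch.nn as nn

class AlignmentMLP(nn.Module):
    """a(s_{i-1}, h_j) = v_a^T tanh(W_a s_{i-1} + U_a h_j), followed by a
    softmax over source positions; returns the context vector c_i and the
    corresponding attention line alpha_i."""
    def __init__(self, d_dec, d_enc, d_att):
        super().__init__()
        self.W_a = nn.Linear(d_dec, d_att, bias=False)
        self.U_a = nn.Linear(d_enc, d_att, bias=False)
        self.v_a = nn.Linear(d_att, 1, bias=False)

    def forward(self, s_prev, H):        # s_prev: (batch, d_dec); H: (batch, J, d_enc)
        e = self.v_a(torch.tanh(self.W_a(s_prev).unsqueeze(1) + self.U_a(H)))
        alpha = torch.softmax(e.squeeze(-1), dim=1)        # (batch, J)
        context = (alpha.unsqueeze(-1) * H).sum(dim=1)     # c_i, (batch, d_enc)
        return context, alpha
```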
The decoder is initialized using the last backward state of the encoder and a non-linear function ($\tanh $) for state $s_0$. We use a single-layer GRU RNN; hidden states and output embeddings are 64-dimensional. In preliminary experiments, and as in BIBREF34, we observed better segmentations adopting a “generate first” approach during decoding, where we first generate the current target word, then update the current RNN state. Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) are accordingly modified into:
During training and forced decoding, the hidden state $s_i$ is thus updated using ground-truth embeddings $e(\Omega _{i})$. $\Omega _0$ is the BOS symbol. Our implementation of the output layer ($g$) consists of a MLP and a softmax.
We train for 800 epochs on the whole corpus with Adam (the learning rate is 0.001). Parameters are updated after each mini-batch of 64 sentence pairs. A dropout layer BIBREF35 is applied to both source and target embedding layers, with a rate of 0.5. The weights in all linear layers are initialized with Glorot's normalized method (Equation (16) in BIBREF36) and bias vectors are initialized to 0. Embeddings are initialized with the normal distribution $\mathcal {N}(0, 0.1)$. Except for the bridge between the encoder and the decoder, the initialization of RNN weights is kept to PyTorch defaults. During training, we minimize the NLL loss $\mathcal {L}_\mathrm {NLL}$ (see Section SECREF3), adding optionally the auxiliary loss $\mathcal {L}_\mathrm {AUX}$ (Section SECREF22). When the auxiliary loss term is used, we schedule it to be integrated progressively so as to avoid degenerate solutions with coefficient $\lambda _\mathrm {AUX}(k)$ at epoch $k$ defined by:
where $K$ is the total number of epochs and $W$ a wait parameter. The complete loss at epoch $k$ is thus $\mathcal {L}_\mathrm {NLL} + \lambda _\mathrm {AUX} \cdot \mathcal {L}_\mathrm {AUX}$. After trying values ranging from 100 to 700, we set $W$ to 200. We approximate the absolute value in Equation (DISPLAY_FORM24) by $|x| \triangleq \sqrt{x^2 + 0.001}$, in order to make the auxiliary loss function differentiable.
Experiments and discussion ::: Data and evaluation
Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider an alphabetic representation with no tonal information. On the French side, we simply consider the default segmentation into words.
We denote the baseline segmentation system as base, the word-length bias extension as bias, and the auxiliary loss extensions as aux. We also report results for a variant of aux (aux+ratio), in which the auxiliary loss is computed with a factor corresponding to the true length ratio $r_\mathrm {MB/FR}$ between Mboshi and French averaged over the first 100 sentences of the corpus. In this variant, the auxiliary loss is computed as $\vert I - r_\mathrm {MB/FR} \cdot J - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *} \vert $.
We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46.
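For reference, the boundary metrics can be computed per utterance as sketched below; corpus-level scores aggregate these counts over all utterances, and the implementation details are illustrative.

```python
def boundary_counts(predicted_words, reference_words):
    """Return (#correct, #predicted, #reference) internal word boundaries for
    one utterance; the two word lists must concatenate to the same string."""
    def boundaries(words):
        positions, offset = set(), 0
        for w in words[:-1]:
            offset += len(w)
            positions.add(offset)
        return positions

    pred, ref = boundaries(predicted_words), boundaries(reference_words)
    return len(pred & ref), len(pred), len(ref)

def precision_recall_f(correct, n_pred, n_ref):
    p = correct / n_pred if n_pred else 0.0
    r = correct / n_ref if n_ref else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```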
Experiments and discussion ::: Discussion
A first observation is that our baseline method base improves vastly over Pisa's results (by a margin of about 30% on boundary F-measure, BF).
Experiments and discussion ::: Discussion ::: Effects of the word-length bias
The integration of a word-length bias in the attention mechanism seems detrimental to segmentation performance, and results obtained with bias are lower than those obtained with base, except for the sentence exact-match metric (X). To assess whether the introduction of the word-length bias actually encourages target units to “attend more" to longer source words in bias, we compute the correlation between the length of a source word and the quantity of attention it receives (for each source position, we sum attention column-wise: $\sum _i \tilde{\alpha }_{ij}$). Results for all segmentation methods are in Table TABREF50. bias increases the correlation between word lengths and attention, but since this correlation is already high for all methods (base, aux, and aux+ratio), our attempt to increase it further proves detrimental to segmentation here.
Experiments and discussion ::: Discussion ::: Effects of the auxiliary loss
For boundary F-measures (BF) in Figure FIGREF47, aux performs similarly to base, but with a much higher precision, and degraded recall, indicating that the new method does not oversegment as much as base. More insight can be gained from various statistics on the automatically segmented data presented in Table TABREF52. The average token and sentence lengths for aux are closer to their ground-truth values (resp. 4.19 characters and 5.96 words). The global number of tokens produced is also brought closer to its reference. On token metrics, a similar effect is observed, but the trade-off between a lower recall and an increased precision is more favorable and yields more than 3 points in F-measure. These results are encouraging for documentation purposes, where precision is arguably a more valuable metric than recall in a semi-supervised segmentation scenario.
They, however, rely on the crude heuristic that the source and target sides (here French and Mboshi) should have the same number of units, which is only valid for typologically related languages and is not very accurate for our dataset.
As Mboshi is more agglutinative than French (5.96 words per sentence on average in the Mboshi 5K, vs. 8.22 for French), we also consider the lightly supervised setting where the true length ratio is provided. This again turns out to be detrimental to performance, except for the boundary precision (BP) and the sentence exact-match (X). Note also that precision becomes stronger than recall for both boundary and token metrics, indicating under-segmentation. This is confirmed by an average token length that exceeds the ground-truth (and an average sentence length below the true value, see Table TABREF52).
Here again, our control of the target length proves effective: compared to base, the auxiliary loss has the effect to decrease the average sentence length and move it closer to its observed value (5.96), yielding an increased precision, an effect that is amplified with aux+ratio. By tuning this ratio, it is expected that we could even get slightly better results.
Related work
The attention mechanism introduced by BIBREF20 has been further explored by many researchers. BIBREF37, for instance, compare a global to a local approach for attention, and examine several architectures to compute alignment weights $\alpha _{ij}$. BIBREF38 additionally propose a recurrent version of the attention mechanism, where a “dynamic memory” keeps track of the attention received by each source word, and demonstrate better translation results. A more general formulation of the attention mechanism can, lastly, be found in BIBREF39, where structural dependencies between source units can be modeled.
With the goal of improving alignment quality, BIBREF40 computes a distance between attentions and word alignments learnt with the reparameterization of IBM Model 2 from BIBREF41; this distance is then added to the cost function during training. To improve alignments also, BIBREF14 introduce several refinements to the attention mechanism, in the form of structural biases common in word-based alignment models. In this work, the attention model is enriched with features able to control positional bias, fertility, or symmetry in the alignments, which leads to better translations for some language pairs, under low-resource conditions. More work seeking to improve alignment and translation quality can be found in BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47.
Another important line of research related to our work studies the relationship between segmentation and alignment quality: it is recognized that sub-lexical units such as BPE BIBREF48 help solve the unknown-word problem; other notable works along these lines include BIBREF49 and BIBREF50.
CLD has also attracted a growing interest in recent years. Most recent work includes speech-to-text translation BIBREF51, BIBREF52, speech transcription using bilingual supervision BIBREF53, both speech transcription and translation BIBREF54, or automatic phonemic transcription of tonal languages BIBREF55.
Conclusion
In this paper, we explored neural segmentation methods extending the “align to segment” approach, and proposed extensions to move towards joint segmentation and alignment. This involved the introduction of a word-length bias in the attention mechanism and the design of an auxiliary loss. The latter approach yielded improvements over the baseline on all accounts, in particular for the precision metric.
Our results, however, lag behind the best monolingual performance for this dataset (see e.g. BIBREF56). This might be due to the difficulty of computing valid alignments between phonemes and words in very limited data conditions, which remains very challenging, as also demonstrated by the results of Pisa. However, unlike monolingual methods, bilingual methods generate word alignments, and their real benefit should be assessed with alignment-based metrics. This is left for future work, as reference word alignments are not yet available for our data.
Other extensions of this work will focus on ways to mitigate data sparsity with weak supervision information, either by using lists of frequent words or the presence of certain word boundaries on the target side or by using more sophisticated attention models in the spirit of BIBREF14 or BIBREF39. | Bantu |
f41c401a4c6e1be768f8e68f774af3661c890ffd | f41c401a4c6e1be768f8e68f774af3661c890ffd_0 | Q: Does the paper report any alignment-only baseline?
Text: Introduction
All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed at helping field linguists collect, transcribe, analyze, and annotate endangered language data (e.g. BIBREF0, BIBREF1). A remarkable effort in this direction has improved data collection procedures and tools BIBREF2, BIBREF3, enabling the collection of corpora for an increasing number of endangered languages (e.g. BIBREF4).
One of the basic tasks of computational language documentation (CLD) is to identify word or morpheme boundaries in an unsegmented phonemic or orthographic stream. Several unsupervised monolingual word segmentation algorithms exist in the literature, based, for instance, on information-theoretic BIBREF5, BIBREF6 or nonparametric Bayesian techniques BIBREF7, BIBREF8. These techniques are, however, challenged in real-world settings by the small amount of available data.
A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language (WL), which often exist for such data, hoping that the bilingual context will provide additional cues to guide the segmentation algorithm. Such techniques have already been explored, for instance, in BIBREF9, BIBREF10 in the context of improving statistical alignment and translation models; and in BIBREF11, BIBREF12, BIBREF13 using Attentional Neural Machine Translation (NMT) models. In these latter studies, word segmentation is obtained by post-processing attention matrices, taking attention information as a noisy proxy to word alignment BIBREF14.
In this paper, we explore ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information. Our main contribution is a new loss function for jointly learning alignment and segmentation in neural translation models, allowing us to better control the length of utterances. Our experiments with an actual under-resourced language (UL), Mboshi BIBREF17, show that this technique outperforms our bilingual segmentation baseline.
Recurrent architectures in NMT
In this section, we briefly review the main concepts of recurrent architectures for machine translation introduced in BIBREF18, BIBREF19, BIBREF20. In our setting, the source and target sentences are always observed and we are mostly interested in the attention mechanism that is used to induce word segmentation.
Recurrent architectures in NMT ::: RNN encoder-decoder
Sequence-to-sequence models transform a variable-length source sequence into a variable-length target output sequence. In our context, the source sequence is a sequence of words $w_1, \ldots , w_J$ and the target sequence is an unsegmented sequence of phonemes or characters $\omega _1, \ldots , \omega _I$. In the RNN encoder-decoder architecture, an encoder consisting of a RNN reads a sequence of word embeddings $e(w_1),\dots ,e(w_J)$ representing the source and produces a dense representation $c$ of this sentence in a low-dimensional vector space. Vector $c$ is then fed to an RNN decoder producing the output translation $\omega _1,\dots ,\omega _I$ sequentially.
At each step of the input sequence, the encoder hidden states $h_j$ are computed as:
In most cases, $\phi $ corresponds to a long short-term memory (LSTM) BIBREF24 unit or a gated recurrent unit (GRU) BIBREF25, and $h_J$ is used as the fixed-length context vector $c$ initializing the RNN decoder.
On the target side, the decoder predicts each word $\omega _i$, given the context vector $c$ (in the simplest case, $h_J$, the last hidden state of the encoder) and the previously predicted words, using the probability distribution over the output vocabulary $V_T$:
where $s_i$ is the hidden state of the decoder RNN and $g$ is a nonlinear function (e.g. a multi-layer perceptron with a softmax layer) computed by the output layer of the decoder. The hidden state $s_i$ is then updated according to:
where $f$ again corresponds to the function computed by an LSTM or GRU cell.
The encoder and the decoder are trained jointly to maximize the likelihood of the translation $\mathrm {\Omega }=\Omega _1, \dots , \Omega _I$ given the source sentence $\mathrm {w}=w_1,\dots ,w_J$. As reference target words are available during training, $\Omega _i$ (and the corresponding embedding) can be used instead of $\omega _i$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6), a technique known as teacher forcing BIBREF26.
Recurrent architectures in NMT ::: The attention mechanism
Encoding a variable-length source sentence in a fixed-length vector can lead to poor translation results with long sentences BIBREF19. To address this problem, BIBREF20 introduces an attention mechanism which provides a flexible source context to better inform the decoder's decisions. This means that the fixed context vector $c$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) is replaced with a position-dependent context $c_i$, defined as:
where weights $\alpha _{ij}$ are computed by an attention model made of a multi-layer perceptron (MLP) followed by a softmax layer. Denoting $a$ the function computed by the MLP, then
where $e_{ij}$ is known as the energy associated to $\alpha _{ij}$. Lines in the attention matrix $A = (\alpha _{ij})$ sum to 1, and weights $\alpha _{ij}$ can be interpreted as the probability that target word $\omega _i$ is aligned to source word $w_j$. BIBREF20 qualitatively investigated such soft alignments and concluded that their model can correctly align target words to relevant source words (see also BIBREF27, BIBREF28). Our segmentation method (Section SECREF3) relies on the assumption that the same holds when aligning characters or phonemes on the target side to source words.
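The attention step itself can be sketched as follows; the MLP parametrization mirrors the one given in the implementation details below, but the dimensions and names are placeholders (in the bidirectional setting, enc_dim would be the size of the concatenated forward and backward states).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BahdanauAttention(nn.Module):
    """Computes one line of the attention matrix (alpha_i1..alpha_iJ) and the context c_i."""
    def __init__(self, dec_dim=64, enc_dim=64, att_dim=64):
        super().__init__()
        self.W_a = nn.Linear(dec_dim, att_dim, bias=False)
        self.U_a = nn.Linear(enc_dim, att_dim, bias=False)
        self.v_a = nn.Linear(att_dim, 1, bias=False)

    def forward(self, s_prev, enc_states, src_mask):
        # s_prev: (batch, dec_dim); enc_states: (batch, J, enc_dim); src_mask: (batch, J), bool
        energies = self.v_a(torch.tanh(self.W_a(s_prev).unsqueeze(1)
                                       + self.U_a(enc_states))).squeeze(-1)   # e_ij
        energies = energies.masked_fill(~src_mask, float("-inf"))             # ignore padded positions
        alpha = F.softmax(energies, dim=-1)                                   # weights sum to 1
        c_i = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)            # position-dependent context
        return alpha, c_i
```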
Attention-based word segmentation
Recall that our goal is to discover words in an unsegmented stream of target characters (or phonemes) in the under-resourced language. In this section, we first describe a baseline method inspired by the “align to segment” of BIBREF12, BIBREF13. We then propose two extensions providing the model with a signal relevant to the segmentation process, so as to move towards a joint learning of segmentation and alignment.
Attention-based word segmentation ::: Align to segment
An attention matrix $A = (\alpha _{ij})$ can be interpreted as a soft alignment matrix between target and source units, where each cell $\alpha _{ij}$ corresponds to the probability for target symbols $\omega _i$ (here, a phone) to be aligned to the source word $w_j$ (cf. Equation (DISPLAY_FORM10)). In our context, where words need to be discovered on the target side, we follow BIBREF12, BIBREF13 and perform word segmentation as follows:
train an attentional RNN encoder-decoder model with attention using teacher forcing (see Section SECREF2);
force-decode the entire corpus and extract one attention matrix for each sentence pair.
identify boundaries in the target sequences. For each target unit $\omega _i$ of the UL, we identify the source word $w_{a_i}$ to which it is most likely aligned: $\forall i, a_i = \operatornamewithlimits{argmax}_j \alpha _{ij}$. Given these alignment links, a word segmentation is computed by introducing a word boundary in the target whenever two adjacent units are not aligned with the same source word ($a_i \ne a_{i+1}$); a minimal code sketch of this boundary-extraction step is given below.
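The sketch below turns one attention matrix into a segmented target string (our own helper; the toy example at the end is purely illustrative). Running it on every force-decoded attention matrix yields the segmented corpus on which the evaluation metrics are computed.

```python
import numpy as np

def segment_from_attention(attention, target_units):
    """attention: (I, J) matrix of alpha_ij for one sentence pair (padding removed);
    target_units: the I characters/phonemes of the unsegmented UL string."""
    a = attention.argmax(axis=1)              # a_i: most likely source word for each target unit
    words, current = [], [target_units[0]]
    for i in range(1, len(target_units)):
        if a[i] != a[i - 1]:                  # adjacent units aligned to different source words
            words.append("".join(current))    # -> insert a word boundary
            current = []
        current.append(target_units[i])
    words.append("".join(current))
    return words

# toy example: 5 target units, 3 source words
alpha = np.array([[.8, .1, .1], [.7, .2, .1], [.1, .8, .1], [.2, .7, .1], [.1, .2, .7]])
print(segment_from_attention(alpha, list("waapo")))   # -> ['wa', 'ap', 'o']
```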
Considering a (simulated) low-resource setting, and building on BIBREF14's work, BIBREF11 propose to smooth attentional alignments, either by post-processing attention matrices, or by flattening the softmax function in the attention model (see Equation (DISPLAY_FORM10)) with a temperature parameter $T$. This makes sense as the authors examine attentional alignments obtained while training from UL phonemes to WL words. But when translating from WL words to UL characters, this seems less useful: smoothing will encourage a character to align to many words. This technique is further explored by BIBREF29, who make the temperature parameter trainable and specific to each decoding step, so that the model can learn how to control the softness or sharpness of attention distributions, depending on the current word being decoded.
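For reference, the temperature flattening discussed above simply divides the attention energies by $T$ before the softmax; a short sketch (the value of $T$ is arbitrary here):

```python
import torch.nn.functional as F

def attention_with_temperature(energies, T=2.0):
    """Flatten (T > 1) or sharpen (T < 1) the attention distribution over source positions."""
    return F.softmax(energies / T, dim=-1)
```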
Attention-based word segmentation ::: Towards joint alignment and segmentation
One limitation in the approach described above lies in the absence of signal relative to segmentation during RNN training. Attempting to move towards a joint learning of alignment and segmentation, we propose here two extensions aimed at introducing constraints derived from our segmentation heuristic in the training process.
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Word-length bias
Our first extension relies on the assumption that the length of aligned source and target words should correlate. Being in a relationship of mutual translation, aligned words are expected to have comparable frequencies and meaning, hence comparable lengths. This means that the longer a source word is, the more target units should be aligned to it. We implement this idea in the attention mechanism as a word-length bias, changing the computation of the context vector from Equation (DISPLAY_FORM9) to:
where $\psi $ is a monotonically increasing function of the length $|w_j|$ of word $w_j$. This will encourage target units to attend more to longer source words. In practice, we choose $\psi $ to be the identity function and renormalize so as to ensure that lines still sum to 1 in the attention matrices. The context vectors $c_i$ are now computed with attention weights $\tilde{\alpha }_{ij}$ as:
We finally derive the target segmentation from the attention matrix $A = (\tilde{\alpha }_{ij})$, following the method of Section SECREF11.
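A sketch of this rescaling and renormalization for one sentence's attention matrix (with $\psi$ the identity, as in our experiments; the helper name is ours):

```python
import torch

def length_biased_attention(alpha, src_lengths):
    """Rescale one sentence's attention matrix by source word lengths and renormalize its rows.

    alpha:       (I, J) attention weights, rows summing to 1
    src_lengths: (J,)   psi(|w_j|) with psi the identity; the EOS symbol counts as length 1
    """
    biased = alpha * src_lengths.unsqueeze(0)           # alpha_ij * psi(|w_j|)
    return biased / biased.sum(dim=1, keepdim=True)     # rows sum to 1 again

alpha = torch.tensor([[0.5, 0.5], [0.2, 0.8]])
print(length_biased_attention(alpha, torch.tensor([4.0, 1.0])))   # the longer word attracts more weight
```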
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Introducing an auxiliary loss function
Another way to inject segmentation awareness inside our training procedure is to control the number of target words that will be produced during post-processing. The intuition here is that notwithstanding typological discrepancies, the target segmentation should yield a number of target words that is close to the length of the source.
To this end, we complement the main loss function with an additional term $\mathcal {L}_\mathrm {AUX}$ defined as:
The rationale behind this additional term is as follows: recall that a boundary is then inserted on the target side whenever two consecutive units are not aligned to the same source word. The dot product between consecutive lines in the attention matrix will be close to 1 if consecutive target units are aligned to the same source word, and closer to 0 if they are not. The summation thus quantifies the number of target units that will not be followed by a word boundary after segmentation, and $I - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *}$ measures the number of word boundaries that are produced on the target side. Minimizing this auxiliary term should guide the model towards learning attention matrices resulting in target segmentations that have the same number of words on the source and target sides.
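Concretely, one formulation consistent with this description (and with the ratio variant introduced later) penalizes the gap between the source length $J$ and the soft count of target words, as sketched below (our own helper):

```python
import torch

def aux_loss(alpha, J):
    """Auxiliary segmentation term for one sentence pair (a sketch).

    alpha: (I, J) attention matrix, rows summing to 1; J: number of source words.
    I - sum_i <alpha_i, alpha_{i+1}> is a soft count of the word boundaries produced on the target side.
    """
    I = alpha.size(0)
    same_word = (alpha[:-1] * alpha[1:]).sum()     # sum of dot products of consecutive rows
    n_target_words = I - same_word
    return torch.abs(n_target_words - J)           # push towards as many target words as source words
```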
Figure FIGREF25 illustrates the effect of our auxiliary loss on an example. Without auxiliary loss, the segmentation will yield, in this case, 8 target segments (Figure FIGREF25), while the attention learnt with auxiliary loss will yield 5 target segments (Figure FIGREF25); the source sentence, on the other hand, has 4 tokens.
Experiments and discussion
In this section, we describe implementation details for our baseline segmentation system and for the extensions proposed in Section SECREF17, before presenting data and results.
Experiments and discussion ::: Implementation details
Our baseline system is our own reimplementation of Bahdanau's encoder-decoder with attention in PyTorch BIBREF31. The last version of our code, which handles mini-batches efficiently, heavily borrows from Joost Basting's code. Source sentences include an end-of-sentence (EOS) symbol (corresponding to $w_J$ in our notation) and target sentences include both a beginning-of-sentence (BOS) and an EOS symbol. Padding of source and target sentences in mini-batches is required, as well as masking in the attention matrices and during loss computation. Our architecture follows BIBREF20 very closely with some minor changes.
We use a single-layer bidirectional RNN BIBREF32 with GRU cells: these have been shown to perform similarly to LSTM-based RNNs BIBREF33, while computationally more efficient. We use 64-dimensional hidden states for the forward and backward RNNs, and for the embeddings, similarly to BIBREF12, BIBREF13. In Equation (DISPLAY_FORM4), $h_j$ corresponds to the concatenation of the forward and backward states for each step $j$ of the source sequence.
The alignment MLP model computes function $a$ from Equation (DISPLAY_FORM10) as $a(s_{i-1}, h_j)=v_a^\top \tanh (W_a s_{i-1} + U_a h_j)$ – see Appendix A.1.2 in BIBREF20 – where $v_a$, $W_a$, and $U_a$ are weight matrices. For the computation of weights $\tilde{\alpha _{ij}}$ in the word-length bias extension (Equation (DISPLAY_FORM21)), we arbitrarily attribute a length of 1 to the EOS symbol on the source side.
The decoder is initialized using the last backward state of the encoder and a non-linear function ($\tanh $) for state $s_0$. We use a single-layer GRU RNN; hidden states and output embeddings are 64-dimensional. In preliminary experiments, and as in BIBREF34, we observed better segmentations adopting a “generate first” approach during decoding, where we first generate the current target word, then update the current RNN state. Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) are accordingly modified into:
During training and forced decoding, the hidden state $s_i$ is thus updated using ground-truth embeddings $e(\Omega _{i})$. $\Omega _0$ is the BOS symbol. Our implementation of the output layer ($g$) consists of a MLP and a softmax.
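A sketch of one decoder step under this "generate first" ordering (a simplified reading of the modified equations, which are not reproduced here; module and argument names, and the assumption that $c_i$ has the hidden-state size, are ours):

```python
import torch
import torch.nn as nn

class GenerateFirstStep(nn.Module):
    """Predict the current target unit from s_{i-1} and c_i, then update the state with e(Omega_i)."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.out = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.Tanh(),
                                 nn.Linear(hid_dim, vocab_size))        # g: MLP, softmax applied outside
        self.cell = nn.GRUCell(emb_dim + hid_dim, hid_dim)

    def forward(self, s_prev, c_i, gold_current):
        logits = self.out(torch.cat([s_prev, c_i], dim=-1))             # 1) generate omega_i first
        s_i = self.cell(torch.cat([self.embed(gold_current), c_i], dim=-1), s_prev)  # 2) then update s_i
        return logits, s_i
```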
We train for 800 epochs on the whole corpus with Adam (the learning rate is 0.001). Parameters are updated after each mini-batch of 64 sentence pairs. A dropout layer BIBREF35 is applied to both source and target embedding layers, with a rate of 0.5. The weights in all linear layers are initialized with Glorot's normalized method (Equation (16) in BIBREF36) and bias vectors are initialized to 0. Embeddings are initialized with the normal distribution $\mathcal {N}(0, 0.1)$. Except for the bridge between the encoder and the decoder, the initialization of RNN weights is kept to PyTorch defaults. During training, we minimize the NLL loss $\mathcal {L}_\mathrm {NLL}$ (see Section SECREF3), adding optionally the auxiliary loss $\mathcal {L}_\mathrm {AUX}$ (Section SECREF22). When the auxiliary loss term is used, we schedule it to be integrated progressively so as to avoid degenerate solutions with coefficient $\lambda _\mathrm {AUX}(k)$ at epoch $k$ defined by:
where $K$ is the total number of epochs and $W$ a wait parameter. The complete loss at epoch $k$ is thus $\mathcal {L}_\mathrm {NLL} + \lambda _\mathrm {AUX} \cdot \mathcal {L}_\mathrm {AUX}$. After trying values ranging from 100 to 700, we set $W$ to 200. We approximate the absolute value in Equation (DISPLAY_FORM24) by $|x| \triangleq \sqrt{x^2 + 0.001}$, in order to make the auxiliary loss function differentiable.
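As a sketch, the smoothed absolute value and the progressive weighting can be implemented as below; the linear ramp after the wait period is only one plausible instantiation of the schedule, not necessarily the exact formula.

```python
import torch

def smooth_abs(x, eps=1e-3):
    """Differentiable surrogate for |x|: sqrt(x^2 + 0.001)."""
    return torch.sqrt(x * x + eps)

def lambda_aux(k, K=800, W=200):
    """Weight of the auxiliary loss at epoch k: off during the first W epochs, then ramped up.
    The linear ramp is only an assumed instantiation of the schedule."""
    return max(0.0, (k - W) / (K - W))

def total_loss(nll, aux, k):
    return nll + lambda_aux(k) * aux     # L_NLL + lambda_AUX(k) * L_AUX
```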
Experiments and discussion ::: Data and evaluation
Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side, we simply consider the default segmentation into words.
We denote the baseline segmentation system as base, the word-length bias extension as bias, and the auxiliary loss extensions as aux. We also report results for a variant of aux (aux+ratio), in which the auxiliary loss is computed with a factor corresponding to the true length ratio $r_\mathrm {MB/FR}$ between Mboshi and French averaged over the first 100 sentences of the corpus. In this variant, the auxiliary loss is computed as $\vert I - r_\mathrm {MB/FR} \cdot J - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *} \vert $.
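A sketch of the ratio estimation and of the corresponding loss term; whether the ratio is computed from token totals or as an average of per-sentence ratios is not specified here, so a ratio of token totals is assumed:

```python
import torch

def length_ratio(mb_refs, fr_refs, n=100):
    """Approximate r_MB/FR from the first n reference sentence pairs (assumption: ratio of token totals)."""
    mb = sum(len(s.split()) for s in mb_refs[:n])
    fr = sum(len(s.split()) for s in fr_refs[:n])
    return mb / fr

def aux_loss_ratio(alpha, J, r):
    """| I - r * J - sum_i <alpha_i, alpha_{i+1}> | for one sentence pair."""
    I = alpha.size(0)
    return torch.abs(I - r * J - (alpha[:-1] * alpha[1:]).sum())
```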
We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46.
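The boundary and token metrics can be computed as sketched below; this is a generic implementation of such metrics, not necessarily identical to the evaluation script used for the reported numbers.

```python
def boundaries(words):
    """Positions of internal word boundaries in the concatenated character string."""
    out, pos = set(), 0
    for w in words[:-1]:
        pos += len(w)
        out.add(pos)
    return out

def spans(words):
    """(start, end) character span of each token."""
    out, pos = set(), 0
    for w in words:
        out.add((pos, pos + len(w)))
        pos += len(w)
    return out

def prf(correct, predicted, gold):
    p = correct / predicted if predicted else 0.0
    r = correct / gold if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def evaluate(pred_corpus, gold_corpus):
    """pred_corpus / gold_corpus: lists of utterances, each a list of word strings."""
    bc = bp = bg = tc = tp = tg = exact = 0
    for pred, gold in zip(pred_corpus, gold_corpus):
        pb, gb = boundaries(pred), boundaries(gold)
        ps, gs = spans(pred), spans(gold)
        bc, bp, bg = bc + len(pb & gb), bp + len(pb), bg + len(gb)
        tc, tp, tg = tc + len(ps & gs), tp + len(ps), tg + len(gs)
        exact += int(pred == gold)
    return prf(bc, bp, bg), prf(tc, tp, tg), exact / len(gold_corpus)   # (BP,BR,BF), (WP,WR,WF), X
```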
Experiments and discussion ::: Discussion
A first observation is that our baseline method base improves vastly over Pisa's results (by a margin of about 30% on boundary F-measure, BF).
Experiments and discussion ::: Discussion ::: Effects of the word-length bias
The integration of a word-length bias in the attention mechanism seems detrimental to segmentation performance, and results obtained with bias are lower than those obtained with base, except for the sentence exact-match metric (X). To assess whether the introduction of word-length bias actually encourages target units to “attend more” to longer source words in bias, we compute the correlation between the length of a source word and the quantity of attention it receives (for each source position, we sum attention column-wise: $\sum _i \tilde{\alpha }_{ij}$). Results for all segmentation methods are in Table TABREF50. bias does increase the correlation between word lengths and attention, but since this correlation is already high for all methods (base, as well as aux and aux+ratio), our attempt to increase it further proves detrimental to segmentation here.
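This correlation can be computed along the following lines (a sketch; function and variable names are ours):

```python
import numpy as np

def length_attention_correlation(attention_matrices, src_word_lengths):
    """Pearson correlation between source word length and the total attention it receives.

    attention_matrices: list of (I, J) arrays, one per sentence
    src_word_lengths:   list of length-J arrays with |w_j| for the same sentences
    """
    lengths, mass = [], []
    for A, L in zip(attention_matrices, src_word_lengths):
        mass.extend(A.sum(axis=0))    # column-wise sums: attention received by each source word
        lengths.extend(L)
    return np.corrcoef(np.asarray(lengths), np.asarray(mass))[0, 1]
```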
Experiments and discussion ::: Discussion ::: Effects of the auxiliary loss
For boundary F-measures (BF) in Figure FIGREF47, aux performs similarly to base, but with a much higher precision, and degraded recall, indicating that the new method does not oversegment as much as base. More insight can be gained from various statistics on the automatically segmented data presented in Table TABREF52. The average token and sentence lengths for aux are closer to their ground-truth values (resp. 4.19 characters and 5.96 words). The global number of tokens produced is also brought closer to its reference. On token metrics, a similar effect is observed, but the trade-off between a lower recall and an increased precision is more favorable and yields more than 3 points in F-measure. These results are encouraging for documentation purposes, where precision is arguably a more valuable metric than recall in a semi-supervised segmentation scenario.
They, however, rely on a crude heuristic that the source and target sides (here French and Mboshi) should have the same number of units, which is only valid for typologically related languages and is not very accurate for our dataset.
As Mboshi is more agglutinative than French (5.96 words per sentence on average in the Mboshi 5K, vs. 8.22 for French), we also consider the lightly supervised setting where the true length ratio is provided. This again turns out to be detrimental to performance, except for the boundary precision (BP) and the sentence exact-match (X). Note also that precision becomes stronger than recall for both boundary and token metrics, indicating under-segmentation. This is confirmed by an average token length that exceeds the ground-truth (and an average sentence length below the true value, see Table TABREF52).
Here again, our control of the target length proves effective: compared to base, the auxiliary loss has the effect of decreasing the average sentence length and moving it closer to its observed value (5.96), yielding an increased precision, an effect that is amplified with aux+ratio. By tuning this ratio, it is expected that we could even get slightly better results.
Related work
The attention mechanism introduced by BIBREF20 has been further explored by many researchers. BIBREF37, for instance, compare a global to a local approach for attention, and examine several architectures to compute alignment weights $\alpha _{ij}$. BIBREF38 additionally propose a recurrent version of the attention mechanism, where a “dynamic memory” keeps track of the attention received by each source word, and demonstrate better translation results. A more general formulation of the attention mechanism can, lastly, be found in BIBREF39, where structural dependencies between source units can be modeled.
With the goal of improving alignment quality, BIBREF40 computes a distance between attentions and word alignments learnt with the reparameterization of IBM Model 2 from BIBREF41; this distance is then added to the cost function during training. To improve alignments also, BIBREF14 introduce several refinements to the attention mechanism, in the form of structural biases common in word-based alignment models. In this work, the attention model is enriched with features able to control positional bias, fertility, or symmetry in the alignments, which leads to better translations for some language pairs, under low-resource conditions. More work seeking to improve alignment and translation quality can be found in BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47.
Another important line of research related to this work studies the relationship between segmentation and alignment quality: it is recognized that sub-lexical units such as BPE BIBREF48 help solve the unknown-word problem; other notable works along these lines include BIBREF49 and BIBREF50.
CLD has also attracted a growing interest in recent years. Most recent work includes speech-to-text translation BIBREF51, BIBREF52, speech transcription using bilingual supervision BIBREF53, both speech transcription and translation BIBREF54, or automatic phonemic transcription of tonal languages BIBREF55.
Conclusion
In this paper, we explored neural segmentation methods extending the “align to segment” approach, and proposed extensions to move towards joint segmentation and alignment. This involved the introduction of a word-length bias in the attention mechanism and the design of an auxiliary loss. The latter approach yielded improvements over the baseline on all accounts, in particular for the precision metric.
Our results, however, lag behind the best monolingual performance for this dataset (see e.g. BIBREF56). This might be due to the difficulty of computing valid alignments between phonemes and words in very limited data conditions, which remains very challenging, as also demonstrated by the results of Pisa. However, unlike monolingual methods, bilingual methods generate word alignments and their real benefit should be assessed with alignment based metrics. This is left for future work, as reference word alignments are not yet available for our data.
Other extensions of this work will focus on ways to mitigate data sparsity with weak supervision information, either by using lists of frequent words or the presence of certain word boundaries on the target side or by using more sophisticated attention models in the spirit of BIBREF14 or BIBREF39. | Yes |
09cd7ae01fe97bba230c109d0234fee80a1f013b | 09cd7ae01fe97bba230c109d0234fee80a1f013b_0 | Q: What is the dataset used in the paper?
Text: Introduction
All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed at helping field linguists collect, transcribe, analyze, and annotate endangered language data (e.g. BIBREF0, BIBREF1). A remarkable effort in this direction has improved the data collection procedures and tools BIBREF2, BIBREF3, enabling the collection of corpora for an increasing number of endangered languages (e.g. BIBREF4).
One of the basic tasks of computational language documentation (CLD) is to identify word or morpheme boundaries in an unsegmented phonemic or orthographic stream. Several unsupervised monolingual word segmentation algorithms exist in the literature, based, for instance, on information-theoretic BIBREF5, BIBREF6 or nonparametric Bayesian techniques BIBREF7, BIBREF8. These techniques are, however, challenged in real-world settings by the small amount of available data.
A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language (WL), which often exist for such data, hoping that the bilingual context will provide additional cues to guide the segmentation algorithm. Such techniques have already been explored, for instance, in BIBREF9, BIBREF10 in the context of improving statistical alignment and translation models; and in BIBREF11, BIBREF12, BIBREF13 using Attentional Neural Machine Translation (NMT) models. In these latter studies, word segmentation is obtained by post-processing attention matrices, taking attention information as a noisy proxy to word alignment BIBREF14.
In this paper, we explore ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information. Our main contribution is a new loss function for jointly learning alignment and segmentation in neural translation models, allowing us to better control the length of utterances. Our experiments with an actual under-resourced language (UL), Mboshi BIBREF17, show that this technique outperforms our bilingual segmentation baseline.
Recurrent architectures in NMT
In this section, we briefly review the main concepts of recurrent architectures for machine translation introduced in BIBREF18, BIBREF19, BIBREF20. In our setting, the source and target sentences are always observed and we are mostly interested in the attention mechanism that is used to induce word segmentation.
Recurrent architectures in NMT ::: RNN encoder-decoder
Sequence-to-sequence models transform a variable-length source sequence into a variable-length target output sequence. In our context, the source sequence is a sequence of words $w_1, \ldots , w_J$ and the target sequence is an unsegmented sequence of phonemes or characters $\omega _1, \ldots , \omega _I$. In the RNN encoder-decoder architecture, an encoder consisting of a RNN reads a sequence of word embeddings $e(w_1),\dots ,e(w_J)$ representing the source and produces a dense representation $c$ of this sentence in a low-dimensional vector space. Vector $c$ is then fed to an RNN decoder producing the output translation $\omega _1,\dots ,\omega _I$ sequentially.
At each step of the input sequence, the encoder hidden states $h_j$ are computed as:
In most cases, $\phi $ corresponds to a long short-term memory (LSTM) BIBREF24 unit or a gated recurrent unit (GRU) BIBREF25, and $h_J$ is used as the fixed-length context vector $c$ initializing the RNN decoder.
On the target side, the decoder predicts each word $\omega _i$, given the context vector $c$ (in the simplest case, $h_J$, the last hidden state of the encoder) and the previously predicted words, using the probability distribution over the output vocabulary $V_T$:
where $s_i$ is the hidden state of the decoder RNN and $g$ is a nonlinear function (e.g. a multi-layer perceptron with a softmax layer) computed by the output layer of the decoder. The hidden state $s_i$ is then updated according to:
where $f$ again corresponds to the function computed by an LSTM or GRU cell.
The encoder and the decoder are trained jointly to maximize the likelihood of the translation $\mathrm {\Omega }=\Omega _1, \dots , \Omega _I$ given the source sentence $\mathrm {w}=w_1,\dots ,w_J$. As reference target words are available during training, $\Omega _i$ (and the corresponding embedding) can be used instead of $\omega _i$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6), a technique known as teacher forcing BIBREF26.
Recurrent architectures in NMT ::: The attention mechanism
Encoding a variable-length source sentence in a fixed-length vector can lead to poor translation results with long sentences BIBREF19. To address this problem, BIBREF20 introduces an attention mechanism which provides a flexible source context to better inform the decoder's decisions. This means that the fixed context vector $c$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) is replaced with a position-dependent context $c_i$, defined as:
where weights $\alpha _{ij}$ are computed by an attention model made of a multi-layer perceptron (MLP) followed by a softmax layer. Denoting $a$ the function computed by the MLP, then
where $e_{ij}$ is known as the energy associated to $\alpha _{ij}$. Lines in the attention matrix $A = (\alpha _{ij})$ sum to 1, and weights $\alpha _{ij}$ can be interpreted as the probability that target word $\omega _i$ is aligned to source word $w_j$. BIBREF20 qualitatively investigated such soft alignments and concluded that their model can correctly align target words to relevant source words (see also BIBREF27, BIBREF28). Our segmentation method (Section SECREF3) relies on the assumption that the same holds when aligning characters or phonemes on the target side to source words.
Attention-based word segmentation
Recall that our goal is to discover words in an unsegmented stream of target characters (or phonemes) in the under-resourced language. In this section, we first describe a baseline method inspired by the “align to segment” of BIBREF12, BIBREF13. We then propose two extensions providing the model with a signal relevant to the segmentation process, so as to move towards a joint learning of segmentation and alignment.
Attention-based word segmentation ::: Align to segment
An attention matrix $A = (\alpha _{ij})$ can be interpreted as a soft alignment matrix between target and source units, where each cell $\alpha _{ij}$ corresponds to the probability for target symbols $\omega _i$ (here, a phone) to be aligned to the source word $w_j$ (cf. Equation (DISPLAY_FORM10)). In our context, where words need to be discovered on the target side, we follow BIBREF12, BIBREF13 and perform word segmentation as follows:
train an attentional RNN encoder-decoder model with attention using teacher forcing (see Section SECREF2);
force-decode the entire corpus and extract one attention matrix for each sentence pair.
identify boundaries in the target sequences. For each target unit $\omega _i$ of the UL, we identify the source word $w_{a_i}$ to which it is most likely aligned : $\forall i, a_i = \operatornamewithlimits{argmax}_j \alpha _{ij}$. Given these alignment links, a word segmentation is computed by introducing a word boundary in the target whenever two adjacent units are not aligned with the same source word ($a_i \ne a_{i+1}$).
Considering a (simulated) low-resource setting, and building on BIBREF14's work, BIBREF11 propose to smooth attentional alignments, either by post-processing attention matrices, or by flattening the softmax function in the attention model (see Equation (DISPLAY_FORM10)) with a temperature parameter $T$. This makes sense as the authors examine attentional alignments obtained while training from UL phonemes to WL words. But when translating from WL words to UL characters, this seems less useful: smoothing will encourage a character to align to many words. This technique is further explored by BIBREF29, who make the temperature parameter trainable and specific to each decoding step, so that the model can learn how to control the softness or sharpness of attention distributions, depending on the current word being decoded.
Attention-based word segmentation ::: Towards joint alignment and segmentation
One limitation in the approach described above lies in the absence of signal relative to segmentation during RNN training. Attempting to move towards a joint learning of alignment and segmentation, we propose here two extensions aimed at introducing constraints derived from our segmentation heuristic in the training process.
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Word-length bias
Our first extension relies on the assumption that the length of aligned source and target words should correlate. Being in a relationship of mutual translation, aligned words are expected to have comparable frequencies and meaning, hence comparable lengths. This means that the longer a source word is, the more target units should be aligned to it. We implement this idea in the attention mechanism as a word-length bias, changing the computation of the context vector from Equation (DISPLAY_FORM9) to:
where $\psi $ is a monotonically increasing function of the length $|w_j|$ of word $w_j$. This will encourage target units to attend more to longer source words. In practice, we choose $\psi $ to be the identity function and renormalize so as to ensure that lines still sum to 1 in the attention matrices. The context vectors $c_i$ are now computed with attention weights $\tilde{\alpha }_{ij}$ as:
We finally derive the target segmentation from the attention matrix $A = (\tilde{\alpha }_{ij})$, following the method of Section SECREF11.
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Introducing an auxiliary loss function
Another way to inject segmentation awareness inside our training procedure is to control the number of target words that will be produced during post-processing. The intuition here is that notwithstanding typological discrepancies, the target segmentation should yield a number of target words that is close to the length of the source.
To this end, we complement the main loss function with an additional term $\mathcal {L}_\mathrm {AUX}$ defined as:
The rationale behind this additional term is as follows: recall that a boundary is then inserted on the target side whenever two consecutive units are not aligned to the same source word. The dot product between consecutive lines in the attention matrix will be close to 1 if consecutive target units are aligned to the same source word, and closer to 0 if they are not. The summation thus quantifies the number of target units that will not be followed by a word boundary after segmentation, and $I - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *}$ measures the number of word boundaries that are produced on the target side. Minimizing this auxiliary term should guide the model towards learning attention matrices resulting in target segmentations that have the same number of words on the source and target sides.
Figure FIGREF25 illustrates the effect of our auxiliary loss on an example. Without auxiliary loss, the segmentation will yield, in this case, 8 target segments (Figure FIGREF25), while the attention learnt with auxiliary loss will yield 5 target segments (Figure FIGREF25); the source sentence, on the other hand, has 4 tokens.
Experiments and discussion
In this section, we describe implementation details for our baseline segmentation system and for the extensions proposed in Section SECREF17, before presenting data and results.
Experiments and discussion ::: Implementation details
Our baseline system is our own reimplementation of Bahdanau's encoder-decoder with attention in PyTorch BIBREF31. The last version of our code, which handles mini-batches efficiently, heavily borrows from Joost Basting's code. Source sentences include an end-of-sentence (EOS) symbol (corresponding to $w_J$ in our notation) and target sentences include both a beginning-of-sentence (BOS) and an EOS symbol. Padding of source and target sentences in mini-batches is required, as well as masking in the attention matrices and during loss computation. Our architecture follows BIBREF20 very closely with some minor changes.
We use a single-layer bidirectional RNN BIBREF32 with GRU cells: these have been shown to perform similarly to LSTM-based RNNs BIBREF33, while computationally more efficient. We use 64-dimensional hidden states for the forward and backward RNNs, and for the embeddings, similarly to BIBREF12, BIBREF13. In Equation (DISPLAY_FORM4), $h_j$ corresponds to the concatenation of the forward and backward states for each step $j$ of the source sequence.
The alignment MLP model computes function $a$ from Equation (DISPLAY_FORM10) as $a(s_{i-1}, h_j)=v_a^\top \tanh (W_a s_{i-1} + U_a h_j)$ – see Appendix A.1.2 in BIBREF20 – where $v_a$, $W_a$, and $U_a$ are weight matrices. For the computation of weights $\tilde{\alpha _{ij}}$ in the word-length bias extension (Equation (DISPLAY_FORM21)), we arbitrarily attribute a length of 1 to the EOS symbol on the source side.
The decoder is initialized using the last backward state of the encoder and a non-linear function ($\tanh $) for state $s_0$. We use a single-layer GRU RNN; hidden states and output embeddings are 64-dimensional. In preliminary experiments, and as in BIBREF34, we observed better segmentations adopting a “generate first” approach during decoding, where we first generate the current target word, then update the current RNN state. Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) are accordingly modified into:
During training and forced decoding, the hidden state $s_i$ is thus updated using ground-truth embeddings $e(\Omega _{i})$. $\Omega _0$ is the BOS symbol. Our implementation of the output layer ($g$) consists of a MLP and a softmax.
We train for 800 epochs on the whole corpus with Adam (the learning rate is 0.001). Parameters are updated after each mini-batch of 64 sentence pairs. A dropout layer BIBREF35 is applied to both source and target embedding layers, with a rate of 0.5. The weights in all linear layers are initialized with Glorot's normalized method (Equation (16) in BIBREF36) and bias vectors are initialized to 0. Embeddings are initialized with the normal distribution $\mathcal {N}(0, 0.1)$. Except for the bridge between the encoder and the decoder, the initialization of RNN weights is kept to PyTorch defaults. During training, we minimize the NLL loss $\mathcal {L}_\mathrm {NLL}$ (see Section SECREF3), adding optionally the auxiliary loss $\mathcal {L}_\mathrm {AUX}$ (Section SECREF22). When the auxiliary loss term is used, we schedule it to be integrated progressively so as to avoid degenerate solutions with coefficient $\lambda _\mathrm {AUX}(k)$ at epoch $k$ defined by:
where $K$ is the total number of epochs and $W$ a wait parameter. The complete loss at epoch $k$ is thus $\mathcal {L}_\mathrm {NLL} + \lambda _\mathrm {AUX} \cdot \mathcal {L}_\mathrm {AUX}$. After trying values ranging from 100 to 700, we set $W$ to 200. We approximate the absolute value in Equation (DISPLAY_FORM24) by $|x| \triangleq \sqrt{x^2 + 0.001}$, in order to make the auxiliary loss function differentiable.
Experiments and discussion ::: Data and evaluation
Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side, we simply consider the default segmentation into words.
We denote the baseline segmentation system as base, the word-length bias extension as bias, and the auxiliary loss extensions as aux. We also report results for a variant of aux (aux+ratio), in which the auxiliary loss is computed with a factor corresponding to the true length ratio $r_\mathrm {MB/FR}$ between Mboshi and French averaged over the first 100 sentences of the corpus. In this variant, the auxiliary loss is computed as $\vert I - r_\mathrm {MB/FR} \cdot J - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *} \vert $.
We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46.
Experiments and discussion ::: Discussion
A first observation is that our baseline method base improves vastly over Pisa's results (by a margin of about 30% on boundary F-measure, BF).
Experiments and discussion ::: Discussion ::: Effects of the word-length bias
The integration of a word-length bias in the attention mechanism seems detrimental to segmentation performance, and results obtained with bias are lower than those obtained with base, except for the sentence exact-match metric (X). To assess whether the introduction of word-length bias actually encourages target units to “attend more” to longer source words in bias, we compute the correlation between the length of a source word and the quantity of attention it receives (for each source position, we sum attention column-wise: $\sum _i \tilde{\alpha }_{ij}$). Results for all segmentation methods are in Table TABREF50. bias does increase the correlation between word lengths and attention, but since this correlation is already high for all methods (base, as well as aux and aux+ratio), our attempt to increase it further proves detrimental to segmentation here.
Experiments and discussion ::: Discussion ::: Effects of the auxiliary loss
For boundary F-measures (BF) in Figure FIGREF47, aux performs similarly to base, but with a much higher precision, and degraded recall, indicating that the new method does not oversegment as much as base. More insight can be gained from various statistics on the automatically segmented data presented in Table TABREF52. The average token and sentence lengths for aux are closer to their ground-truth values (resp. 4.19 characters and 5.96 words). The global number of tokens produced is also brought closer to its reference. On token metrics, a similar effect is observed, but the trade-off between a lower recall and an increased precision is more favorable and yields more than 3 points in F-measure. These results are encouraging for documentation purposes, where precision is arguably a more valuable metric than recall in a semi-supervised segmentation scenario.
They, however, rely on a crude heuristic that the source and target sides (here French and Mboshi) should have the same number of units, which is only valid for typologically related languages and is not very accurate for our dataset.
As Mboshi is more agglutinative than French (5.96 words per sentence on average in the Mboshi 5K, vs. 8.22 for French), we also consider the lightly supervised setting where the true length ratio is provided. This again turns out to be detrimental to performance, except for the boundary precision (BP) and the sentence exact-match (X). Note also that precision becomes stronger than recall for both boundary and token metrics, indicating under-segmentation. This is confirmed by an average token length that exceeds the ground-truth (and an average sentence length below the true value, see Table TABREF52).
Here again, our control of the target length proves effective: compared to base, the auxiliary loss has the effect of decreasing the average sentence length and moving it closer to its observed value (5.96), yielding an increased precision, an effect that is amplified with aux+ratio. By tuning this ratio, it is expected that we could even get slightly better results.
Related work
The attention mechanism introduced by BIBREF20 has been further explored by many researchers. BIBREF37, for instance, compare a global to a local approach for attention, and examine several architectures to compute alignment weights $\alpha _{ij}$. BIBREF38 additionally propose a recurrent version of the attention mechanism, where a “dynamic memory” keeps track of the attention received by each source word, and demonstrate better translation results. A more general formulation of the attention mechanism can, lastly, be found in BIBREF39, where structural dependencies between source units can be modeled.
With the goal of improving alignment quality, BIBREF40 computes a distance between attentions and word alignments learnt with the reparameterization of IBM Model 2 from BIBREF41; this distance is then added to the cost function during training. To improve alignments also, BIBREF14 introduce several refinements to the attention mechanism, in the form of structural biases common in word-based alignment models. In this work, the attention model is enriched with features able to control positional bias, fertility, or symmetry in the alignments, which leads to better translations for some language pairs, under low-resource conditions. More work seeking to improve alignment and translation quality can be found in BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47.
Another important line of research related to this work studies the relationship between segmentation and alignment quality: it is recognized that sub-lexical units such as BPE BIBREF48 help solve the unknown-word problem; other notable works along these lines include BIBREF49 and BIBREF50.
CLD has also attracted a growing interest in recent years. Most recent work includes speech-to-text translation BIBREF51, BIBREF52, speech transcription using bilingual supervision BIBREF53, both speech transcription and translation BIBREF54, or automatic phonemic transcription of tonal languages BIBREF55.
Conclusion
In this paper, we explored neural segmentation methods extending the “align to segment” approach, and proposed extensions to move towards joint segmentation and alignment. This involved the introduction of a word-length bias in the attention mechanism and the design of an auxiliary loss. The latter approach yielded improvements over the baseline on all accounts, in particular for the precision metric.
Our results, however, lag behind the best monolingual performance for this dataset (see e.g. BIBREF56). This might be due to the difficulty of computing valid alignments between phonemes and words in very limited data conditions, which remains very challenging, as also demonstrated by the results of Pisa. However, unlike monolingual methods, bilingual methods generate word alignments and their real benefit should be assessed with alignment based metrics. This is left for future work, as reference word alignments are not yet available for our data.
Other extensions of this work will focus on ways to mitigate data sparsity with weak supervision information, either by using lists of frequent words or the presence of certain word boundaries on the target side or by using more sophisticated attention models in the spirit of BIBREF14 or BIBREF39. | French-Mboshi 5K corpus |
be3e020ba84bc53dfb90b8acaf549004b66e31e2 | be3e020ba84bc53dfb90b8acaf549004b66e31e2_0 | Q: How is the word segmentation task evaluated?
Text: Introduction
All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed at helping field linguists collect, transcribe, analyze, and annotate endangered language data (e.g. BIBREF0, BIBREF1). A remarkable effort in this direction has improved the data collection procedures and tools BIBREF2, BIBREF3, enabling the collection of corpora for an increasing number of endangered languages (e.g. BIBREF4).
One of the basic tasks of computational language documentation (CLD) is to identify word or morpheme boundaries in an unsegmented phonemic or orthographic stream. Several unsupervised monolingual word segmentation algorithms exist in the literature, based, for instance, on information-theoretic BIBREF5, BIBREF6 or nonparametric Bayesian techniques BIBREF7, BIBREF8. These techniques are, however, challenged in real-world settings by the small amount of available data.
A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language (WL), which often exist for such data, hoping that the bilingual context will provide additional cues to guide the segmentation algorithm. Such techniques have already been explored, for instance, in BIBREF9, BIBREF10 in the context of improving statistical alignment and translation models; and in BIBREF11, BIBREF12, BIBREF13 using Attentional Neural Machine Translation (NMT) models. In these latter studies, word segmentation is obtained by post-processing attention matrices, taking attention information as a noisy proxy to word alignment BIBREF14.
In this paper, we explore ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information. Our main contribution is a new loss function for jointly learning alignment and segmentation in neural translation models, allowing us to better control the length of utterances. Our experiments with an actual under-resourced language (UL), Mboshi BIBREF17, show that this technique outperforms our bilingual segmentation baseline.
Recurrent architectures in NMT
In this section, we briefly review the main concepts of recurrent architectures for machine translation introduced in BIBREF18, BIBREF19, BIBREF20. In our setting, the source and target sentences are always observed and we are mostly interested in the attention mechanism that is used to induce word segmentation.
Recurrent architectures in NMT ::: RNN encoder-decoder
Sequence-to-sequence models transform a variable-length source sequence into a variable-length target output sequence. In our context, the source sequence is a sequence of words $w_1, \ldots , w_J$ and the target sequence is an unsegmented sequence of phonemes or characters $\omega _1, \ldots , \omega _I$. In the RNN encoder-decoder architecture, an encoder consisting of a RNN reads a sequence of word embeddings $e(w_1),\dots ,e(w_J)$ representing the source and produces a dense representation $c$ of this sentence in a low-dimensional vector space. Vector $c$ is then fed to an RNN decoder producing the output translation $\omega _1,\dots ,\omega _I$ sequentially.
At each step of the input sequence, the encoder hidden states $h_j$ are computed as:
In most cases, $\phi $ corresponds to a long short-term memory (LSTM) BIBREF24 unit or a gated recurrent unit (GRU) BIBREF25, and $h_J$ is used as the fixed-length context vector $c$ initializing the RNN decoder.
On the target side, the decoder predicts each word $\omega _i$, given the context vector $c$ (in the simplest case, $h_J$, the last hidden state of the encoder) and the previously predicted words, using the probability distribution over the output vocabulary $V_T$:
where $s_i$ is the hidden state of the decoder RNN and $g$ is a nonlinear function (e.g. a multi-layer perceptron with a softmax layer) computed by the output layer of the decoder. The hidden state $s_i$ is then updated according to:
where $f$ again corresponds to the function computed by an LSTM or GRU cell.
The encoder and the decoder are trained jointly to maximize the likelihood of the translation $\mathrm {\Omega }=\Omega _1, \dots , \Omega _I$ given the source sentence $\mathrm {w}=w_1,\dots ,w_J$. As reference target words are available during training, $\Omega _i$ (and the corresponding embedding) can be used instead of $\omega _i$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6), a technique known as teacher forcing BIBREF26.
Recurrent architectures in NMT ::: The attention mechanism
Encoding a variable-length source sentence in a fixed-length vector can lead to poor translation results with long sentences BIBREF19. To address this problem, BIBREF20 introduces an attention mechanism which provides a flexible source context to better inform the decoder's decisions. This means that the fixed context vector $c$ in Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) is replaced with a position-dependent context $c_i$, defined as:
where weights $\alpha _{ij}$ are computed by an attention model made of a multi-layer perceptron (MLP) followed by a softmax layer. Denoting $a$ the function computed by the MLP, then
where $e_{ij}$ is known as the energy associated to $\alpha _{ij}$. Lines in the attention matrix $A = (\alpha _{ij})$ sum to 1, and weights $\alpha _{ij}$ can be interpreted as the probability that target word $\omega _i$ is aligned to source word $w_j$. BIBREF20 qualitatively investigated such soft alignments and concluded that their model can correctly align target words to relevant source words (see also BIBREF27, BIBREF28). Our segmentation method (Section SECREF3) relies on the assumption that the same holds when aligning characters or phonemes on the target side to source words.
Attention-based word segmentation
Recall that our goal is to discover words in an unsegmented stream of target characters (or phonemes) in the under-resourced language. In this section, we first describe a baseline method inspired by the “align to segment” of BIBREF12, BIBREF13. We then propose two extensions providing the model with a signal relevant to the segmentation process, so as to move towards a joint learning of segmentation and alignment.
Attention-based word segmentation ::: Align to segment
An attention matrix $A = (\alpha _{ij})$ can be interpreted as a soft alignment matrix between target and source units, where each cell $\alpha _{ij}$ corresponds to the probability for target symbols $\omega _i$ (here, a phone) to be aligned to the source word $w_j$ (cf. Equation (DISPLAY_FORM10)). In our context, where words need to be discovered on the target side, we follow BIBREF12, BIBREF13 and perform word segmentation as follows:
train an attentional RNN encoder-decoder model with attention using teacher forcing (see Section SECREF2);
force-decode the entire corpus and extract one attention matrix for each sentence pair.
identify boundaries in the target sequences. For each target unit $\omega _i$ of the UL, we identify the source word $w_{a_i}$ to which it is most likely aligned : $\forall i, a_i = \operatornamewithlimits{argmax}_j \alpha _{ij}$. Given these alignment links, a word segmentation is computed by introducing a word boundary in the target whenever two adjacent units are not aligned with the same source word ($a_i \ne a_{i+1}$).
Considering a (simulated) low-resource setting, and building on BIBREF14's work, BIBREF11 propose to smooth attentional alignments, either by post-processing attention matrices, or by flattening the softmax function in the attention model (see Equation (DISPLAY_FORM10)) with a temperature parameter $T$. This makes sense as the authors examine attentional alignments obtained while training from UL phonemes to WL words. But when translating from WL words to UL characters, this seems less useful: smoothing will encourage a character to align to many words. This technique is further explored by BIBREF29, who make the temperature parameter trainable and specific to each decoding step, so that the model can learn how to control the softness or sharpness of attention distributions, depending on the current word being decoded.
Attention-based word segmentation ::: Towards joint alignment and segmentation
One limitation in the approach described above lies in the absence of signal relative to segmentation during RNN training. Attempting to move towards a joint learning of alignment and segmentation, we propose here two extensions aimed at introducing constraints derived from our segmentation heuristic in the training process.
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Word-length bias
Our first extension relies on the assumption that the length of aligned source and target words should correlate. Being in a relationship of mutual translation, aligned words are expected to have comparable frequencies and meaning, hence comparable lengths. This means that the longer a source word is, the more target units should be aligned to it. We implement this idea in the attention mechanism as a word-length bias, changing the computation of the context vector from Equation (DISPLAY_FORM9) to:
where $\psi $ is a monotonically increasing function of the length $|w_j|$ of word $w_j$. This will encourage target units to attend more to longer source words. In practice, we choose $\psi $ to be the identity function and renormalize so as to ensure that lines still sum to 1 in the attention matrices. The context vectors $c_i$ are now computed with attention weights $\tilde{\alpha }_{ij}$ as:
We finally derive the target segmentation from the attention matrix $A = (\tilde{\alpha }_{ij})$, following the method of Section SECREF11.
Attention-based word segmentation ::: Towards joint alignment and segmentation ::: Introducing an auxiliary loss function
Another way to inject segmentation awareness inside our training procedure is to control the number of target words that will be produced during post-processing. The intuition here is that notwithstanding typological discrepancies, the target segmentation should yield a number of target words that is close to the length of the source.
To this end, we complement the main loss function with an additional term $\mathcal {L}_\mathrm {AUX}$ defined as:
The rationale behind this additional term is as follows: recall that a boundary is then inserted on the target side whenever two consecutive units are not aligned to the same source word. The dot product between consecutive lines in the attention matrix will be close to 1 if consecutive target units are aligned to the same source word, and closer to 0 if they are not. The summation thus quantifies the number of target units that will not be followed by a word boundary after segmentation, and $I - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *}$ measures the number of word boundaries that are produced on the target side. Minimizing this auxiliary term should guide the model towards learning attention matrices resulting in target segmentations that have the same number of words on the source and target sides.
Figure FIGREF25 illustrates the effect of our auxiliary loss on an example. Without auxiliary loss, the segmentation will yield, in this case, 8 target segments (Figure FIGREF25), while the attention learnt with auxiliary loss will yield 5 target segments (Figure FIGREF25); the source sentence, on the other hand, has 4 tokens.
Experiments and discussion
In this section, we describe implementation details for our baseline segmentation system and for the extensions proposed in Section SECREF17, before presenting data and results.
Experiments and discussion ::: Implementation details
Our baseline system is our own reimplementation of Bahdanau's encoder-decoder with attention in PyTorch BIBREF31. The last version of our code, which handles mini-batches efficiently, heavily borrows from Joost Basting's code. Source sentences include an end-of-sentence (EOS) symbol (corresponding to $w_J$ in our notation) and target sentences include both a beginning-of-sentence (BOS) and an EOS symbol. Padding of source and target sentences in mini-batches is required, as well as masking in the attention matrices and during loss computation. Our architecture follows BIBREF20 very closely with some minor changes.
We use a single-layer bidirectional RNN BIBREF32 with GRU cells: these have been shown to perform similarly to LSTM-based RNNs BIBREF33, while computationally more efficient. We use 64-dimensional hidden states for the forward and backward RNNs, and for the embeddings, similarly to BIBREF12, BIBREF13. In Equation (DISPLAY_FORM4), $h_j$ corresponds to the concatenation of the forward and backward states for each step $j$ of the source sequence.
The alignment MLP model computes function $a$ from Equation (DISPLAY_FORM10) as $a(s_{i-1}, h_j)=v_a^\top \tanh (W_a s_{i-1} + U_a h_j)$ – see Appendix A.1.2 in BIBREF20 – where $v_a$, $W_a$, and $U_a$ are weight matrices. For the computation of weights $\tilde{\alpha _{ij}}$ in the word-length bias extension (Equation (DISPLAY_FORM21)), we arbitrarily attribute a length of 1 to the EOS symbol on the source side.
The decoder is initialized using the last backward state of the encoder and a non-linear function ($\tanh $) for state $s_0$. We use a single-layer GRU RNN; hidden states and output embeddings are 64-dimensional. In preliminary experiments, and as in BIBREF34, we observed better segmentations adopting a “generate first” approach during decoding, where we first generate the current target word, then update the current RNN state. Equations (DISPLAY_FORM5) and (DISPLAY_FORM6) are accordingly modified into:
During training and forced decoding, the hidden state $s_i$ is thus updated using ground-truth embeddings $e(\Omega _{i})$. $\Omega _0$ is the BOS symbol. Our implementation of the output layer ($g$) consists of a MLP and a softmax.
We train for 800 epochs on the whole corpus with Adam (the learning rate is 0.001). Parameters are updated after each mini-batch of 64 sentence pairs. A dropout layer BIBREF35 is applied to both source and target embedding layers, with a rate of 0.5. The weights in all linear layers are initialized with Glorot's normalized method (Equation (16) in BIBREF36) and bias vectors are initialized to 0. Embeddings are initialized with the normal distribution $\mathcal {N}(0, 0.1)$. Except for the bridge between the encoder and the decoder, the initialization of RNN weights is kept to PyTorch defaults. During training, we minimize the NLL loss $\mathcal {L}_\mathrm {NLL}$ (see Section SECREF3), adding optionally the auxiliary loss $\mathcal {L}_\mathrm {AUX}$ (Section SECREF22). When the auxiliary loss term is used, we schedule it to be integrated progressively so as to avoid degenerate solutions with coefficient $\lambda _\mathrm {AUX}(k)$ at epoch $k$ defined by:
where $K$ is the total number of epochs and $W$ a wait parameter. The complete loss at epoch $k$ is thus $\mathcal {L}_\mathrm {NLL} + \lambda _\mathrm {AUX} \cdot \mathcal {L}_\mathrm {AUX}$. After trying values ranging from 100 to 700, we set $W$ to 200. We approximate the absolute value in Equation (DISPLAY_FORM24) by $|x| \triangleq \sqrt{x^2 + 0.001}$, in order to make the auxiliary loss function differentiable.
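To make the differentiable loss concrete, here is a small sketch of the smoothed absolute value and the auxiliary term. The aux term follows the aux+ratio formula quoted in the next section, and we assume that with ratio = 1.0 it reduces to the base auxiliary loss; the warm-up function is illustrative only, since the paper's exact schedule for $\lambda _\mathrm {AUX}(k)$ is given by its own equation.

```python
import torch

def smooth_abs(x, eps: float = 0.001):
    """Differentiable surrogate for |x|: sqrt(x^2 + eps)."""
    return torch.sqrt(x * x + eps)

def aux_loss(attn, n_src_words, ratio=1.0):
    """Auxiliary loss for one sentence (our reading of the formula).

    attn: (I, J) attention matrix (target steps x source positions).
    The overlap term softly counts adjacent target symbols attending to the
    same source word, so (I - overlap) approximates the number of predicted
    segments, which the loss pulls towards ratio * n_src_words.
    """
    n_tgt = attn.size(0)
    overlap = (attn[:-1] * attn[1:]).sum()   # sum_i alpha_{i,*} . alpha_{i+1,*}
    return smooth_abs(n_tgt - ratio * n_src_words - overlap)

def aux_coefficient(epoch, wait=200, total=800):
    """Illustrative schedule only: zero during the wait period, then a linear
    ramp to 1. The paper defines lambda_AUX(k) with its own equation, which
    may differ from this assumption."""
    return max(0.0, min(1.0, (epoch - wait) / float(total - wait)))
```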
Experiments and discussion ::: Data and evaluation
Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider an alphabetic representation with no tonal information. On the French side, we simply consider the default segmentation into words.
We denote the baseline segmentation system as base, the word-length bias extension as bias, and the auxiliary loss extensions as aux. We also report results for a variant of aux (aux+ratio), in which the auxiliary loss is computed with a factor corresponding to the true length ratio $r_\mathrm {MB/FR}$ between Mboshi and French averaged over the first 100 sentences of the corpus. In this variant, the auxiliary loss is computed as $\vert I - r_\mathrm {MB/FR} \cdot J - \sum _{i=1}^{I-1} \alpha _{i,*}^\top \alpha _{i+1, *} \vert $.
We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46.
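To make the boundary metrics concrete, here is a minimal helper (our own, not the authors' evaluation script) that scores one utterance; token metrics additionally require both boundaries of a token to match, and the exact-match metric simply checks whether the predicted and gold token lists are identical.

```python
def boundary_prf(gold_tokens, pred_tokens):
    """Boundary precision/recall/F on one utterance.

    Both inputs are lists of tokens whose concatenation is the same character
    string; boundaries are the character offsets between tokens (the
    utterance-final boundary is excluded).
    """
    def boundaries(tokens):
        cuts, pos = set(), 0
        for tok in tokens[:-1]:
            pos += len(tok)
            cuts.add(pos)
        return cuts

    g, p = boundaries(gold_tokens), boundaries(pred_tokens)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

# e.g. boundary_prf(["wa", "bana"], ["wa", "ba", "na"]) -> (0.5, 1.0, 0.667)
```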
Experiments and discussion ::: Discussion
A first observation is that our baseline method base improves substantially over Pisa's results (by a margin of about 30% on boundary F-measure, BF).
Experiments and discussion ::: Discussion ::: Effects of the word-length bias
The integration of a word-length bias in the attention mechanism seems detrimental to segmentation performance, and results obtained with bias are lower than those obtained with base, except for the sentence exact-match metric (X). To assess whether the introduction of the word-length bias actually encourages target units to “attend more" to longer source words in bias, we compute the correlation between the length of a source word and the quantity of attention it receives (for each source position, we sum attention column-wise: $\sum _i \tilde{\alpha }_{ij}$). Results for all segmentation methods are in Table TABREF50. bias increases the correlation between word lengths and attention, but since this correlation is already high for all methods (base, as well as aux and aux+ratio), our attempt to increase it further proves detrimental to segmentation here.
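For reference, the correlation reported in Table TABREF50 can be computed with a small helper of our own along the following lines; we assume the EOS column, if present, has been dropped beforehand.

```python
import numpy as np

def length_attention_correlation(attn, src_words):
    """Pearson correlation between source word lengths and the total
    attention each word receives (column sums of the attention matrix).

    attn: (I, J) numpy array of attention weights; src_words: list of J tokens.
    """
    received = attn.sum(axis=0)                         # sum_i alpha_{ij} for each source position j
    lengths = np.array([len(w) for w in src_words], dtype=float)
    return float(np.corrcoef(lengths, received)[0, 1])
```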
Experiments and discussion ::: Discussion ::: Effects of the auxiliary loss
For boundary F-measures (BF) in Figure FIGREF47, aux performs similarly to base, but with a much higher precision, and degraded recall, indicating that the new method does not oversegment as much as base. More insight can be gained from various statistics on the automatically segmented data presented in Table TABREF52. The average token and sentence lengths for aux are closer to their ground-truth values (resp. 4.19 characters and 5.96 words). The global number of tokens produced is also brought closer to its reference. On token metrics, a similar effect is observed, but the trade-off between a lower recall and an increased precision is more favorable and yields more than 3 points in F-measure. These results are encouraging for documentation purposes, where precision is arguably a more valuable metric than recall in a semi-supervised segmentation scenario.
These results, however, rely on the crude heuristic that the source and target sides (here French and Mboshi) should have the same number of units, which is only valid for typologically related languages and not very accurate for our dataset.
As Mboshi is more agglutinative than French (5.96 words per sentence on average in the Mboshi 5K, vs. 8.22 for French), we also consider the lightly supervised setting where the true length ratio is provided. This again turns out to be detrimental to performance, except for the boundary precision (BP) and the sentence exact-match (X). Note also that precision becomes stronger than recall for both boundary and token metrics, indicating under-segmentation. This is confirmed by an average token length that exceeds the ground-truth (and an average sentence length below the true value, see Table TABREF52).
Here again, our control of the target length proves effective: compared to base, the auxiliary loss has the effect of decreasing the average sentence length and moving it closer to its observed value (5.96), yielding an increased precision, an effect that is amplified with aux+ratio. By tuning this ratio, it is expected that we could obtain even slightly better results.
Related work
The attention mechanism introduced by BIBREF20 has been further explored by many researchers. BIBREF37, for instance, compare a global to a local approach for attention, and examine several architectures to compute alignment weights $\alpha _{ij}$. BIBREF38 additionally propose a recurrent version of the attention mechanism, where a “dynamic memory” keeps track of the attention received by each source word, and demonstrate better translation results. A more general formulation of the attention mechanism can, lastly, be found in BIBREF39, where structural dependencies between source units can be modeled.
With the goal of improving alignment quality, BIBREF40 computes a distance between attentions and word alignments learnt with the reparameterization of IBM Model 2 from BIBREF41; this distance is then added to the cost function during training. To improve alignments also, BIBREF14 introduce several refinements to the attention mechanism, in the form of structural biases common in word-based alignment models. In this work, the attention model is enriched with features able to control positional bias, fertility, or symmetry in the alignments, which leads to better translations for some language pairs, under low-resource conditions. More work seeking to improve alignment and translation quality can be found in BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47.
Another important line of research related to our work studies the relationship between segmentation and alignment quality: it is recognized that sub-lexical units such as BPE BIBREF48 help solve the unknown word problem; other notable works along these lines include BIBREF49 and BIBREF50.
CLD has also attracted a growing interest in recent years. Most recent work includes speech-to-text translation BIBREF51, BIBREF52, speech transcription using bilingual supervision BIBREF53, both speech transcription and translation BIBREF54, or automatic phonemic transcription of tonal languages BIBREF55.
Conclusion
In this paper, we explored neural segmentation methods extending the “align to segment” approach, and proposed extensions to move towards joint segmentation and alignment. This involved the introduction of a word-length bias in the attention mechanism and the design of an auxiliary loss. The latter approach yielded improvements over the baseline on all accounts, in particular for the precision metric.
Our results, however, lag behind the best monolingual performance for this dataset (see e.g. BIBREF56). This might be due to the difficulty of computing valid alignments between phonemes and words in very limited data conditions, which remains very challenging, as also demonstrated by the results of Pisa. However, unlike monolingual methods, bilingual methods generate word alignments, and their real benefit should be assessed with alignment-based metrics. This is left for future work, as reference word alignments are not yet available for our data.
Other extensions of this work will focus on ways to mitigate data sparsity with weak supervision information, either by using lists of frequent words or the presence of certain word boundaries on the target side or by using more sophisticated attention models in the spirit of BIBREF14 or BIBREF39. | precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF), exact-match (X) metric |
24014a040447013a8cf0c0f196274667320db79f | 24014a040447013a8cf0c0f196274667320db79f_0 | Q: What are performance compared to former models?
Text: Introduction
Dependency parsing predicts the existence and type of linguistic dependency relations between words (as shown in Figure FIGREF1), which is a critical step in accomplishing deep natural language processing. Dependency parsing has been well developed BIBREF0, BIBREF1, and it generally relies on two types of parsing models: transition-based models and graph-based models. The former BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF4 traditionally apply local and greedy transition-based algorithms, while the latter BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 apply globally optimized graph-based algorithms.
A transition-based dependency parser processes the sentence word-by-word, commonly from left to right, and forms a dependency tree incrementally from the operations predicted. This method is advantageous in that inference on the projective dependency tree is linear in time complexity with respect to sentence length; however, it has several obvious disadvantages. Because the decision-making of each step is based on partially-built parse trees, special training methods are required, which results in slow training and error propagation, as well as weak long-distance dependence processing BIBREF13.
Graph-based parsers learn scoring functions in one-shot and then perform an exhaustive search over the entire tree space for the highest-scoring tree. This improves the performances of the parsers, particularly the long-distance dependency processing, but these models usually have slow inference speed to encourage higher accuracy.
The easy-first parsing approach BIBREF14, BIBREF15 was designed to integrate the advantages of graph-based parsers’ better-performing trees and transition-based parsers’ linear decoding complexity. By processing the input tokens in a stepwise easy-to-hard order, the algorithm makes use of structured information on partially-built parse trees. Because of the presence of rich, structured information, exhaustive inference is not an optimal solution - we can leverage this information to conduct inference much more quickly. As an alternative to exhaustive inference, easy-first chooses to use an approximated greedy search that only explores a tiny fraction of the search space. Compared to graph-based parsers, however, easy-first parsers have two apparent weaknesses: slower training and worse performance. According to our preliminary studies, with the current state-of-the-art systems, we must either sacrifice training complexity for decoding speed, or sacrifice decoding speed for higher accuracy.
In this paper, we propose a novel Global (featuring) Greedy (inference) parsing architecture that achieves fast training, high decoding speed and good performance. With our approach, we use the one-shot arc scoring scheme as in graph-based parsers instead of the stepwise local scoring of transition-based parsers. This is essential for achieving competitive performance, efficient training, and fast decoding. Since we chose a greedy algorithm to preserve linear-time decoding, we introduce a parsing order scoring scheme that guides the decoding order at inference time so as to achieve the highest accuracy possible. With one-shot scoring as in graph-based parsers, our proposed parser performs arc-attachment scoring, parsing order scoring, and decoding simultaneously in an incremental, deterministic fashion, just as transition-based parsers do.
We evaluated our models on the common benchmark treebanks PTB and CTB, as well as on the multilingual CoNLL and the Universal Dependency treebanks. From the evaluation results on the benchmark treebanks, our proposed model gives significant improvements when compared to the baseline parser. In summary, our contributions are thus:
$\bullet $ We integrate the arc scoring mechanism of graph-based parsers with the linear time complexity inference approach of transition-based parsing models; by replacing stepwise local feature scoring, this significantly alleviates the drawbacks of these models, improving the moderate performance caused by error propagation and increasing the training speed previously limited by a lack of parallelism.
$\bullet $ Empirical evaluations on benchmark and multilingual treebanks show that our method achieves state-of-the-art or comparable performance, indicating that our novel neural network architecture for dependency parsing is simple, effective, and efficient.
$\bullet $ Our work shows that using neural networks’ excellent learning ability, we can simultaneously achieve both improved accuracy and speed.
The General Greedy Parsing
The global greedy parser will build its dependency trees in a stepwise manner without backtracking, which takes a general greedy decoding algorithm as in easy-first parsers.
Using easy-first parsing's notation, we describe the decoding in our global greedy parsing. As both easy-first and global greedy parsing rely on a series of deterministic parsing actions in a general parsing order (unlike the fixed left-to-right order of standard transitional parsers), they need a specific data structure which consists of a list of unattached nodes (including their partial structures) referred to as “pending". At each step, the parser chooses a specific action $\hat{a}$ on position $i$ with the given arc score score($\cdot $), which is generated by an arc scorer in the parser. Given an intermediate state of parsing with pending $P=\lbrace p_0, p_1, p_2, \cdots , p_N\rbrace $, the attachment action is determined as follows:
where $\mathcal {A}$ denotes the set of the allowed actions, and $i$ is the index of the node in pending. In addition to distinguishing the correct attachments from the incorrect ones, the arc scorer also assigns the highest scores to the easiest attachment decisions and lower scores to the harder decisions, thus determining the parsing order of an input sentence.
For projective parsing, there are exactly two types of actions in the allowed action set: ATTACHLEFT($i$) and ATTACHRIGHT($i$). Let $p_i$ refer to $i$-th element in pending, then the allowed actions can be formally defined as follows:
$\bullet $ ATTACHLEFT($i$): attaches $p_{i+1}$ to $p_i$ , which results in an arc ($p_i$, $p_{i+1}$) headed by $p_i$, and removes $p_{i+1}$ from pending.
$\bullet $ ATTACHRIGHT($i$): attaches $p_i$ to $p_{i+1}$ , which results in an arc ($p_{i+1}$, $p_i$) headed by $p_{i+1}$, and removes $p_i$ from pending.
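The pending list and the two actions above can be combined into a compact decoding loop. The sketch below is our own illustration, assuming a precomputed score function over (head, dependent) pairs (e.g. the final one-shot arc score described later); it is not the authors' implementation.

```python
def greedy_projective_decode(n_words, score):
    """Greedy decoding over a pending list with ATTACHLEFT/ATTACHRIGHT.

    n_words: number of tokens; node 0 is the artificial root.
    score(head, dep): precomputed one-shot score for attaching `dep` under `head`.
    Returns a dict mapping each token to its predicted head.
    """
    pending = list(range(n_words + 1))           # root plus the tokens, in order
    heads = {}
    while len(pending) > 1:
        best = None                              # (score, head, dep)
        for i in range(len(pending) - 1):
            left, right = pending[i], pending[i + 1]
            # ATTACHLEFT(i): arc (left -> right), right is removed from pending
            candidates = [(score(left, right), left, right)]
            if left != 0:                        # the root can never be a dependent
                # ATTACHRIGHT(i): arc (right -> left), left is removed from pending
                candidates.append((score(right, left), right, left))
            for cand in candidates:
                if best is None or cand[0] > best[0]:
                    best = cand
        _, head, dep = best
        heads[dep] = head
        pending.remove(dep)
    return heads
```

Since scanning the adjacent pairs and removing the attached node are both linear in the pending list length, each step costs O(n), consistent with the complexity analysis given later in the paper.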
Global Greedy Parsing Model
Our proposed global greedy model contains three components: (1) an encoder that processes the input sentence and maps it into hidden states that lie in a low dimensional vector space $h_i$ and feeds it into a specific representation layer to strip away irrelevant information, (2) a modified scorer with a parsing order objective, and (3) a greedy inference module that generates the dependency tree.
Global Greedy Parsing Model ::: Encoder
We employ a bi-directional LSTM-CNN architecture (BiLSTM-CNN) to encode the context in which convolutional neural networks (CNNs) learn character-level information $e_{char}$ to better handle out-of-vocabulary words. We then combine these words' character level embeddings with their word embedding $e_{word}$ and POS embedding $e_{pos}$ to create a context-independent representation, which we then feed into the BiLSTM to create word-level context-dependent representations. To further enhance the word-level representation, we leverage an external fixed representation $e_{lm}$ from pre-trained ELMo BIBREF16 or BERT BIBREF17 layer features. Finally, the encoder outputs a sequence of contextualized representations $h_i$.
Because the contextualized representations will be used for several different purposes in the following scorers, it is necessary to specify a representation for each purpose. As shown in BIBREF18, applying a multi-layer perceptron (MLP) to the recurrent output states before the classifier strips away irrelevant information for the current decision, reducing both the dimensionality and the risk of model overfitting. Therefore, in order to distinguish the biaffine scorer's head and dependent representations and the parsing order scorer's representations, we add a separate contextualized representation layer with ReLU as its activation function for each syntax head $h^{head}_i \in H_{head}$ specific representations, dependent $h^{dep}_i \in H_{dep}$ specific representations, and parsing order $h^{order}_i \in H_{order}$:
Global Greedy Parsing Model ::: Scorers
The traditional easy-first model relies on an incremental tree scoring process with stepwise loss backpropagation and sub-tree removal facilitated by local scoring, relying on the scorer and loss backpropagation to hopefully obtain the parsing order. Communicating the information from the scorer and the loss requires training a dynamic oracle, which exposes the model to the configurations resulting from erroneous decisions. This training process is done at the token level, not the sentence level, which unfortunately means incremental scoring prevents parallelized training and causes error propagation. We thus forego incremental local scoring, and, inspired by the design of graph-based parsing models, we instead choose to score all of the syntactic arc candidates in one-shot, which allows for global featuring at a sentence level; however, the introduction of one-shot scoring brings new problems. Since the graph-based method relies on a tree space search algorithm to find the tree with the highest score, the parsing order is not important at all. If we apply one-shot scoring to greedy parsing, we need a mechanism like a stack (as is used in transition-based parsing) to preserve the parsing order.
Both transition-based and easy-first parsers build parse trees in an incremental style, which forces tree formation to follow an order starting from either the root and working towards the leaf nodes or vice versa. When a parser builds an arc that skips any layer, certain errors will exist that it will be impossible for the parent node to find. We thus implement a parsing order prediction module to learn a parsing order objective that outputs a parsing order score addition to the arc score to ensure that each pending node is attached to its parent only after all (or at least as many as possible) of its children have been collected.
Our scorer consists of two parts: a biaffine scorer for one-shot scoring and a parsing order scorer for parsing order guiding. For the biaffine scorer, we adopt the biaffine attention mechanism BIBREF18 to score all possible head-dependent pairs:
where $\textbf {W}_{arc}$, $\textbf {U}_{arc}$, $\textbf {V}_{arc}$, $\textbf {b}_{arc}$ are the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector, respectively.
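As a concrete illustration, a minimal, unbatched PyTorch sketch of such a biaffine arc scorer is given below; parameter names mirror the notation above, but this is our own simplification and the exact parameterization of BIBREF18 may differ.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """s_arc[h, d] = h_head[h]^T W h_dep[d] + U.h_head[h] + V.h_dep[d] + b."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.zeros(dim, dim))   # bi-linear term
        self.U = nn.Parameter(torch.zeros(dim))        # linear term on head representations
        self.V = nn.Parameter(torch.zeros(dim))        # linear term on dependent representations
        self.b = nn.Parameter(torch.zeros(1))          # bias
        nn.init.xavier_uniform_(self.W)

    def forward(self, h_head, h_dep):
        # h_head, h_dep: (n, dim) head-specific and dependent-specific MLP outputs.
        # Rows of the result index candidate heads, columns index dependents.
        bilinear = h_head @ self.W @ h_dep.t()                       # (n, n)
        return bilinear + (h_head @ self.U)[:, None] + (h_dep @ self.V)[None, :] + self.b
```

At inference time, the parsing order score introduced next is added to these arc scores before the greedy decoder consumes them.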
If we perform greedy inference only on the $s_{arc}$ directly, as in Figure FIGREF6, at step $i$, the decoder tests every pair in the pending list, and although the current score fits the correct tree structure for this example, because backtracking is not allowed in the deterministic greedy inference, according to the maximum score $s_{arc}$, the edge selected in step $i$+1 is “root"$\rightarrow $“come". This prevents the child nodes (“today" and “.") from finding the correct parent node in the subsequent step. Thus, the decoder is stuck with this error. This problem can be solved or mitigated by using a max spanning tree (MST) decoder or by adding beam search method to the inference, but neither guarantees maintaining linear time decoding. Therefore, we propose a new scorer for parsing order $s_{order}$. In the scoring stage, the parsing order score is passed to the decoder to guide it and prevent (as much as possible) resorting to erroneous choices.
We formally define the parsing order score for decoding. To decode the nodes at the bottom of the syntax tree first, we define the parsing order priority as the layer “level" or “position" in the tree. The biaffine output score is the probability of edge (dependency) existence, between 0 and 1, so the greater the probability, the more likely an edge is to exist. Thus, our parsing order scorer gives a layer score for a node, and then we add this layer score to the biaffine score. Consequently, the relative scores within the same layer are kept unchanged, and the higher the score of a node in the bottom layer, the higher its decoding priority will be. We therefore define $s_{order}$ as:
where $\textbf {W}_{order}$ and $\textbf {b}_{order} $ are parameters for the parsing order scorer. Finally, the one-shot arc score is:
Similarly, we use the biaffine scorer for dependency label classification. We apply MLPs to the contextualized representations before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time has each word as a dependent of its highest-scoring head (although at test time we ensure that the parse is a well-formed tree via the greedy parsing algorithm).
Global Greedy Parsing Model ::: Training Objectives
To parse the syntax tree $y$ for a sentence $x$ with length $l$, the easy-first model relies on an action-by-action process performed on pending. In the training stage, the loss is accumulated once per step (action), and the model is updated by gradient backpropagation according to a preset frequency. This prohibits parallelism during model training, both between and within sentences. Therefore, the traditional easy-first model was trained to maximize the following probability:
where $\emph {pending}_i$ is the pending list state at step $i$.
Our proposed model, in contrast, uses a training method similar to that of graph-based models, in which the arc scores are all obtained in one shot. Consequently, it does not rely on the pending list in the training phase and only uses the pending list to drive linear-time parsing in the inference stage. Our model is trained to optimize the probability of the dependency tree $y$ when given a sentence $x$: $P_\theta (y|x)$, which can be factorized as:
where $\theta $ represents learnable parameters, $l$ denotes the length of the processing sentence, and $y^{arc}_i$, $y^{rel}_i$ denote the highest-scoring head and dependency relation for node $x_i$. Thus, our model factors the distribution according to a bottom-up tree structure.
Corresponding to multiple objectives, several parts compose the loss of our model. The overall training loss is the sum of three objectives:
where the loss for arc prediction $\mathcal {L}^{arc}$ is the negative log-likelihood loss of the golden structure $y^{arc}$:
the loss for relation prediction $\mathcal {L}^{rel}$ is implemented as the negative log-likelihood loss of the golden relation $y^{rel}$ with the golden structure $y^{arc}$,
and the loss for parsing order prediction $\mathcal {L}^{order}$:
Because the parsing order score of each layer in the tree increases by 1, we frame it as a classification problem and therefore add a multi-class classifier module as the order scorer.
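A compact sketch of how the three objectives can be combined is shown below. The cross-entropy calls implement the negative log-likelihood terms above (our reading of the equations); gold_layers denotes each word's layer index in the gold tree, however the paper defines that layer, and is assumed to be precomputed.

```python
import torch.nn.functional as F

def parser_loss(arc_logits, rel_logits, order_logits, gold_heads, gold_rels, gold_layers):
    """Sum of the three training objectives for one sentence.

    arc_logits:   (n, n)        row i holds the scores of every candidate head for word i
    rel_logits:   (n, n_rels)   label scores for each word given its gold head
    order_logits: (n, n_layers) layer scores from the parsing order classifier
    """
    l_arc = F.cross_entropy(arc_logits, gold_heads)       # negative log-likelihood of gold heads
    l_rel = F.cross_entropy(rel_logits, gold_rels)        # labels conditioned on the gold structure
    l_order = F.cross_entropy(order_logits, gold_layers)  # tree layer of each word
    return l_arc + l_rel + l_order
```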
Global Greedy Parsing Model ::: Non-Projective Inference
For non-projective inference, we introduce two additional arc-building actions as follows.
$\bullet $ NP-ATTACHLEFT($i$): attaches $p_{j}$ to $p_i$ where $j > i$, which builds an arc ($p_i$, $p_{j}$) headed by $p_i$, and removes $p_{j}$ from pending.
$\bullet $ NP-ATTACHRIGHT($i$): attaches $p_{j}$ to $p_i$ where $j < i$ which builds an arc ($p_i$, $p_j$) headed by $p_i$, and removes $p_j$ from pending.
If we use the two arc-building actions for non-projective dependency trees directly on $s_{final}$, the time complexity will become $O(n^3)$, so we need to modify this algorithm to accommodate the non-projective dependency trees. Specifically, we no longer use $s_{final}$ directly for greedy search but instead divide each decision into two steps. The first step is to use the order score $s_{order}$ to sort the pending list in descending order. Then, the second step is to find the edge with the largest arc score $s_{arc}$ for this node in the first position of the pending list.
Global Greedy Parsing Model ::: Time Complexity
The number of decoding steps needed to build a parse tree for a sentence equals its length, $n$. Combining this with the search in the pending list (at each step, we need to find the highest-scoring pair in the pending list to attach, which has a runtime of $O(n)$), the time complexity of a full decoding is $O(n^2)$, which is equal to 1st-order non-projective graph-based parsing but more efficient than 1st-order projective parsing with $O(n^3)$ and other higher-order graph parsing models. Compared with the current state-of-the-art transition-based parser STACKPTR BIBREF23, which has the same decoding time complexity as ours, our decoding takes $n$ steps while STACKPTR takes $2n-1$ steps and needs to compute an attention vector at each step, so our model would actually be much faster than STACKPTR in decoding.
For the non-projective inference in our model, the complexity is still $O(n^2)$. Since the order score and the arc score are two parts that do not affect each other, we can sort the order scores with time complexity of $O$($n$log$n$) and then iterate in this descending order. The iteration time complexity is $O(n)$ and determining the arc is also $O(n)$, so the overall time complexity is $O$($n$log$n$) $+$ $O(n^2)$, simplifying to $O(n^2)$.
Experiments
We evaluate our parsing model on the English Penn Treebank (PTB), the Chinese Penn Treebank (CTB), treebanks from two CoNLL shared tasks and the Universal Dependency (UD) Treebanks, using unlabeled attachment scores (UAS) and labeled attachment scores (LAS) as the metrics. Punctuation is ignored as in previous work BIBREF18. For English and Chinese, we use the projective inference, while for other languages, we use the non-projective one.
Experiments ::: Treebanks
For English, we use the Stanford Dependency (SD 3.3.0) BIBREF37 conversion of the Penn Treebank BIBREF38, and follow the standard splitting convention for PTB, using sections 2-21 for training, section 22 as a development set and section 23 as a test set. We use the Stanford POS tagger BIBREF39 to generate predicted POS tags.
For Chinese, we adopt the splitting convention for CTB BIBREF40 described in BIBREF19. The dependencies are converted with the Penn2Malt converter. Gold segmentation and POS tags are used as in previous work BIBREF19.
For the CoNLL Treebanks, we use the English treebank from the CoNLL-2008 shared task BIBREF41 and all 13 treebanks from the CoNLL-X shared task BIBREF42. The experimental settings are the same as BIBREF43.
For UD Treebanks, following the selection of BIBREF23, we take 12 treebanks from UD version 2.1 (Nivre et al. 2017): Bulgarian (bg), Catalan (ca), Czech (cs), Dutch (nl), English (en), French (fr), German (de), Italian (it), Norwegian (no), Romanian (ro), Russian (ru) and Spanish (es). We adopt the standard training/dev/test splits and use the universal POS tags provided in each treebank for all the languages.
Experiments ::: Implementation Details ::: Pre-trained Embeddings
We use the GloVe BIBREF44 trained on Wikipedia and Gigaword as external embeddings for English parsing. For other languages, we use the word vectors from 157 languages trained on Wikipedia and Crawl using fastText BIBREF45. We use the extracted BERT layer features to enhance the performance on CoNLL-X and UD treebanks.
Experiments ::: Implementation Details ::: Hyperparameters
The character embeddings are 8-dimensional and randomly initialized. In the character CNN, the convolutions have a window size of 3 and consist of 50 filters. We use 3 stacked bidirectional LSTMs with 512-dimensional hidden states each. The outputs of the BiLSTM employ a 512-dimensional MLP layer for the arc scorer, a 128-dimensional MLP layer for the relation scorer, and a 128-dimensional MLP layer for the parsing order scorer, all using ReLU as the activation function. Additionally, since we treat the parsing order score as a classification problem over parse tree layers, we set its range to $[0, 1, ..., 32]$.
Experiments ::: Implementation Details ::: Training
Parameter optimization is performed with the Adam optimizer with $\beta _1$ = $\beta _2$ = 0.9. We choose an initial learning rate of $\eta _0$ = 0.001. The learning rate $\eta $ is annealed by multiplying a fixed decay rate $\rho $ = 0.75 when parsing performance stops increasing on validation sets. To reduce the effects of an exploding gradient, we use a gradient clipping of 5.0. For the BiLSTM, we use recurrent dropout with a drop rate of 0.33 between hidden states and 0.33 between layers. Following BIBREF18, we also use embedding dropout with a rate of 0.33 on all word, character, and POS tag embeddings.
Experiments ::: Main Results
We now compare our model with several other recently proposed parsers as shown in Table TABREF9. Our global greedy parser significantly outperforms the easy-first parser in BIBREF14 (HT-LSTM) on both PTB and CTB. Compared with other graph- and transition-based parsers, our model is also competitive with the state-of-the-art on PTB when considering the UAS metric. Compared to state-of-the-art parsers in transition and graph types, BIAF and STACKPTR, respectively, our model gives better or comparable results but with much faster training and decoding. Additionally, with the help of pre-trained language models, ELMo or BERT, our model can achieve even greater results.
In order to explore the impact of the parsing order objective on parsing performance, we replace the greedy inference with the traditional MST parsing algorithm (i.e., BIAF + parsing order objective); the result is shown as “This work (MST)", which gives a slight performance improvement compared to greedy inference, showing that the globally optimized decoding of the graph model still retains an advantage. Besides, compared to the standard training objective for a graph-based parser, the performance improvement is slight but still shows that the proposed parsing order objective is indeed helpful.
Experiments ::: CoNLL Results
Table TABREF11 presents the results on 14 treebanks from the CoNLL shared tasks. Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF.
Experiments ::: UD Results
Following BIBREF23, we report results on the test sets of 12 different languages from the UD treebanks along with the current state-of-the-art: BIAF and STACKPTR. Although both BIAF and STACKPTR parsers have achieved relatively high parsing accuracies on the 12 languages and have all UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF.
Experiments ::: Runtime Analysis
In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on PTB training and development set using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is basically the same speed as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF. STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring in decoding is not negligible when compared with the processing speed and actually even accounts for a significant portion of the runtime.
Conclusion
This paper presents a new global greedy parser in which we enable greedy parsing inference compatible with the global arc scoring of graph-based parsing models instead of the local feature scoring of transition-based parsing models. The proposed parser can perform projective parsing when using only two arc-building actions, and it also supports non-projective parsing when two extra non-projective arc-building actions are introduced. Compared to graph-based and transition-based parsers, our parser achieves a better tradeoff between parsing accuracy and efficiency by taking advantage of both graph-based models' training methods and transition-based models' linear time decoding strategies. Experimental results on 28 treebanks show the effectiveness of our parser by achieving good performance on 27 treebanks, including the PTB and CTB benchmarks. | model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF |
9aa52b898d029af615b95b18b79078e9bed3d766 | 9aa52b898d029af615b95b18b79078e9bed3d766_0 | Q: How faster is training and decoding compared to former models?
Text: Introduction
Dependency parsing predicts the existence and type of linguistic dependency relations between words (as shown in Figure FIGREF1), which is a critical step in accomplishing deep natural language processing. Dependency parsing has been well developed BIBREF0, BIBREF1, and it generally relies on two types of parsing models: transition-based models and graph-based models. The former BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF4 traditionally apply local and greedy transition-based algorithms, while the latter BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 apply globally optimized graph-based algorithms.
A transition-based dependency parser processes the sentence word-by-word, commonly from left to right, and forms a dependency tree incrementally from the operations predicted. This method is advantageous in that inference on the projective dependency tree is linear in time complexity with respect to sentence length; however, it has several obvious disadvantages. Because the decision-making of each step is based on partially-built parse trees, special training methods are required, which results in slow training and error propagation, as well as weak long-distance dependence processing BIBREF13.
Graph-based parsers learn scoring functions in one-shot and then perform an exhaustive search over the entire tree space for the highest-scoring tree. This improves the performances of the parsers, particularly the long-distance dependency processing, but these models usually have slow inference speed to encourage higher accuracy.
The easy-first parsing approach BIBREF14, BIBREF15 was designed to integrate the advantages of graph-based parsers’ better-performing trees and transition-based parsers’ linear decoding complexity. By processing the input tokens in a stepwise easy-to-hard order, the algorithm makes use of structured information on partially-built parse trees. Because of the presence of rich, structured information, exhaustive inference is not an optimal solution - we can leverage this information to conduct inference much more quickly. As an alternative to exhaustive inference, easy-first chooses to use an approximated greedy search that only explores a tiny fraction of the search space. Compared to graph-based parsers, however, easy-first parsers have two apparent weaknesses: slower training and worse performance. According to our preliminary studies, with the current state-of-the-art systems, we must either sacrifice training complexity for decoding speed, or sacrifice decoding speed for higher accuracy.
In this paper, we propose a novel Global (featuring) Greedy (inference) parsing architecture that achieves fast training, high decoding speed and good performance. With our approach, we use the one-shot arc scoring scheme as in graph-based parsers instead of the stepwise local scoring of transition-based parsers. This is essential for achieving competitive performance, efficient training, and fast decoding. Since we chose a greedy algorithm to preserve linear-time decoding, we introduce a parsing order scoring scheme that guides the decoding order at inference time so as to achieve the highest accuracy possible. With one-shot scoring as in graph-based parsers, our proposed parser performs arc-attachment scoring, parsing order scoring, and decoding simultaneously in an incremental, deterministic fashion, just as transition-based parsers do.
We evaluated our models on the common benchmark treebanks PTB and CTB, as well as on the multilingual CoNLL and the Universal Dependency treebanks. From the evaluation results on the benchmark treebanks, our proposed model gives significant improvements when compared to the baseline parser. In summary, our contributions are thus:
$\bullet $ We integrate the arc scoring mechanism of graph-based parsers with the linear time complexity inference approach of transition-based parsing models; by replacing stepwise local feature scoring, this significantly alleviates the drawbacks of these models, improving the moderate performance caused by error propagation and increasing the training speed previously limited by a lack of parallelism.
$\bullet $ Empirical evaluations on benchmark and multilingual treebanks show that our method achieves state-of-the-art or comparable performance, indicating that our novel neural network architecture for dependency parsing is simple, effective, and efficient.
$\bullet $ Our work shows that using neural networks’ excellent learning ability, we can simultaneously achieve both improved accuracy and speed.
The General Greedy Parsing
The global greedy parser will build its dependency trees in a stepwise manner without backtracking, which takes a general greedy decoding algorithm as in easy-first parsers.
Using easy-first parsing's notation, we describe the decoding in our global greedy parsing. As both easy-first and global greedy parsing rely on a series of deterministic parsing actions in a general parsing order (unlike the fixed left-to-right order of standard transitional parsers), they need a specific data structure which consists of a list of unattached nodes (including their partial structures) referred to as “pending". At each step, the parser chooses a specific action $\hat{a}$ on position $i$ with the given arc score score($\cdot $), which is generated by an arc scorer in the parser. Given an intermediate state of parsing with pending $P=\lbrace p_0, p_1, p_2, \cdots , p_N\rbrace $, the attachment action is determined as follows:
where $\mathcal {A}$ denotes the set of the allowed actions, and $i$ is the index of the node in pending. In addition to distinguishing the correct attachments from the incorrect ones, the arc scorer also assigns the highest scores to the easiest attachment decisions and lower scores to the harder decisions, thus determining the parsing order of an input sentence.
For projective parsing, there are exactly two types of actions in the allowed action set: ATTACHLEFT($i$) and ATTACHRIGHT($i$). Let $p_i$ refer to $i$-th element in pending, then the allowed actions can be formally defined as follows:
$\bullet $ ATTACHLEFT($i$): attaches $p_{i+1}$ to $p_i$ , which results in an arc ($p_i$, $p_{i+1}$) headed by $p_i$, and removes $p_{i+1}$ from pending.
$\bullet $ ATTACHRIGHT($i$): attaches $p_i$ to $p_{i+1}$ , which results in an arc ($p_{i+1}$, $p_i$) headed by $p_{i+1}$, and removes $p_i$ from pending.
Global Greedy Parsing Model
Our proposed global greedy model contains three components: (1) an encoder that processes the input sentence and maps it into hidden states that lie in a low dimensional vector space $h_i$ and feeds it into a specific representation layer to strip away irrelevant information, (2) a modified scorer with a parsing order objective, and (3) a greedy inference module that generates the dependency tree.
Global Greedy Parsing Model ::: Encoder
We employ a bi-directional LSTM-CNN architecture (BiLSTM-CNN) to encode the context in which convolutional neural networks (CNNs) learn character-level information $e_{char}$ to better handle out-of-vocabulary words. We then combine these words' character level embeddings with their word embedding $e_{word}$ and POS embedding $e_{pos}$ to create a context-independent representation, which we then feed into the BiLSTM to create word-level context-dependent representations. To further enhance the word-level representation, we leverage an external fixed representation $e_{lm}$ from pre-trained ELMo BIBREF16 or BERT BIBREF17 layer features. Finally, the encoder outputs a sequence of contextualized representations $h_i$.
Because the contextualized representations will be used for several different purposes in the following scorers, it is necessary to specify a representation for each purpose. As shown in BIBREF18, applying a multi-layer perceptron (MLP) to the recurrent output states before the classifier strips away irrelevant information for the current decision, reducing both the dimensionality and the risk of model overfitting. Therefore, in order to distinguish the biaffine scorer's head and dependent representations and the parsing order scorer's representations, we add a separate contextualized representation layer with ReLU as its activation function for each syntax head $h^{head}_i \in H_{head}$ specific representations, dependent $h^{dep}_i \in H_{dep}$ specific representations, and parsing order $h^{order}_i \in H_{order}$:
Global Greedy Parsing Model ::: Scorers
The traditional easy-first model relies on an incremental tree scoring process with stepwise loss backpropagation and sub-tree removal facilitated by local scoring, relying on the scorer and loss backpropagation to hopefully obtain the parsing order. Communicating the information from the scorer and the loss requires training a dynamic oracle, which exposes the model to the configurations resulting from erroneous decisions. This training process is done at the token level, not the sentence level, which unfortunately means incremental scoring prevents parallelized training and causes error propagation. We thus forego incremental local scoring, and, inspired by the design of graph-based parsing models, we instead choose to score all of the syntactic arc candidates in one-shot, which allows for global featuring at a sentence level; however, the introduction of one-shot scoring brings new problems. Since the graph-based method relies on a tree space search algorithm to find the tree with the highest score, the parsing order is not important at all. If we apply one-shot scoring to greedy parsing, we need a mechanism like a stack (as is used in transition-based parsing) to preserve the parsing order.
Both transition-based and easy-first parsers build parse trees in an incremental style, which forces tree formation to follow an order starting from either the root and working towards the leaf nodes or vice versa. When a parser builds an arc that skips any layer, certain errors will exist that it will be impossible for the parent node to find. We thus implement a parsing order prediction module to learn a parsing order objective that outputs a parsing order score addition to the arc score to ensure that each pending node is attached to its parent only after all (or at least as many as possible) of its children have been collected.
Our scorer consists of two parts: a biaffine scorer for one-shot scoring and a parsing order scorer for parsing order guiding. For the biaffine scorer, we adopt the biaffine attention mechanism BIBREF18 to score all possible head-dependent pairs:
where $\textbf {W}_{arc}$, $\textbf {U}_{arc}$, $\textbf {V}_{arc}$, $\textbf {b}_{arc}$ are the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector, respectively.
If we perform greedy inference only on the $s_{arc}$ directly, as in Figure FIGREF6, at step $i$, the decoder tests every pair in the pending list, and although the current score fits the correct tree structure for this example, because backtracking is not allowed in the deterministic greedy inference, according to the maximum score $s_{arc}$, the edge selected in step $i$+1 is “root"$\rightarrow $“come". This prevents the child nodes (“today" and “.") from finding the correct parent node in the subsequent step. Thus, the decoder is stuck with this error. This problem can be solved or mitigated by using a max spanning tree (MST) decoder or by adding beam search method to the inference, but neither guarantees maintaining linear time decoding. Therefore, we propose a new scorer for parsing order $s_{order}$. In the scoring stage, the parsing order score is passed to the decoder to guide it and prevent (as much as possible) resorting to erroneous choices.
We formally define the parsing order score for decoding. To decode the nodes at the bottom of the syntax tree first, we define the parsing order priority as the layer “level" or “position" in the tree. The biaffine output score is the probability of edge (dependency) existence, between 0 and 1, so the greater the probability, the more likely an edge is to exist. Thus, our parsing order scorer gives a layer score for a node, and then we add this layer score to the biaffine score. Consequently, the relative scores within the same layer are kept unchanged, and the higher the score of a node in the bottom layer, the higher its decoding priority will be. We therefore define $s_{order}$ as:
where $\textbf {W}_{order}$ and $\textbf {b}_{order} $ are parameters for the parsing order scorer. Finally, the one-shot arc score is:
Similarly, we use the biaffine scorer for dependency label classification. We apply MLPs to the contextualized representations before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time has each word as a dependent of its highest-scoring head (although at test time we ensure that the parse is a well-formed tree via the greedy parsing algorithm).
Global Greedy Parsing Model ::: Training Objectives
To parse the syntax tree $y$ for a sentence $x$ with length $l$, the easy-first model relies on an action-by-action process performed on pending. In the training stage, the loss is accumulated once per step (action), and the model is updated by gradient backpropagation according to a preset frequency. This prohibits parallelism during model training, both between and within sentences. Therefore, the traditional easy-first model was trained to maximize the following probability:
where $\emph {pending}_i$ is the pending list state at step $i$.
Our proposed model, in contrast, uses a training method similar to that of graph-based models, in which the arc scores are all obtained in one shot. Consequently, it does not rely on the pending list in the training phase and only uses the pending list to drive linear-time parsing in the inference stage. Our model is trained to optimize the probability of the dependency tree $y$ when given a sentence $x$: $P_\theta (y|x)$, which can be factorized as:
where $\theta $ represents learnable parameters, $l$ denotes the length of the processing sentence, and $y^{arc}_i$, $y^{rel}_i$ denote the highest-scoring head and dependency relation for node $x_i$. Thus, our model factors the distribution according to a bottom-up tree structure.
Corresponding to multiple objectives, several parts compose the loss of our model. The overall training loss is the sum of three objectives:
where the loss for arc prediction $\mathcal {L}^{arc}$ is the negative log-likelihood loss of the golden structure $y^{arc}$:
the loss for relation prediction $\mathcal {L}^{rel}$ is implemented as the negative log-likelihood loss of the golden relation $y^{rel}$ with the golden structure $y^{arc}$,
and the loss for parsing order prediction $\mathcal {L}^{order}$:
Because the parsing order score of each layer in the tree increases by 1, we frame it as a classification problem and therefore add a multi-class classifier module as the order scorer.
Global Greedy Parsing Model ::: Non-Projective Inference
For non-projective inference, we introduce two additional arc-building actions as follows.
$\bullet $ NP-ATTACHLEFT($i$): attaches $p_{j}$ to $p_i$ where $j > i$, which builds an arc ($p_i$, $p_{j}$) headed by $p_i$, and removes $p_{j}$ from pending.
$\bullet $ NP-ATTACHRIGHT($i$): attaches $p_{j}$ to $p_i$ where $j < i$ which builds an arc ($p_i$, $p_j$) headed by $p_i$, and removes $p_j$ from pending.
If we use the two arc-building actions for non-projective dependency trees directly on $s_{final}$, the time complexity will become $O(n^3)$, so we need to modify this algorithm to accommodate the non-projective dependency trees. Specifically, we no longer use $s_{final}$ directly for greedy search but instead divide each decision into two steps. The first step is to use the order score $s_{order}$ to sort the pending list in descending order. Then, the second step is to find the edge with the largest arc score $s_{arc}$ for this node in the first position of the pending list.
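A sketch of this two-step non-projective procedure is given below (our own illustration, with scores passed in as plain arrays). Sorting once up front is equivalent to re-sorting the shrinking pending list, since removals preserve the relative order of the remaining items.

```python
def greedy_nonprojective_decode(n_words, order_score, arc_score):
    """Two-step non-projective decoding.

    order_score[i]: priority of word i (higher means decoded earlier);
    arc_score[h][d]: one-shot score of attaching word d under head h.
    Word indices are 1..n_words; 0 is the artificial root.
    """
    # Step 1: fix the decoding order by descending parsing order score.
    order = sorted(range(1, n_words + 1), key=lambda i: -order_score[i])
    heads = {}
    pending = set(range(n_words + 1))            # nodes still available as heads
    for dep in order:
        pending.discard(dep)
        # Step 2: attach `dep` to the best-scoring remaining head (left or right).
        heads[dep] = max(pending, key=lambda h: arc_score[h][dep])
    return heads
```

Because every dependent chooses its head among nodes decoded later (or the root), the result is always a well-formed tree, and the cost is O(n log n) for the sort plus O(n^2) for the attachments, matching the analysis in the next subsection.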
Global Greedy Parsing Model ::: Time Complexity
The number of decoding steps needed to build a parse tree for a sentence equals its length, $n$. Combining this with the search in the pending list (at each step, we need to find the highest-scoring pair in the pending list to attach, which has a runtime of $O(n)$), the time complexity of a full decoding is $O(n^2)$, which is equal to 1st-order non-projective graph-based parsing but more efficient than 1st-order projective parsing with $O(n^3)$ and other higher-order graph parsing models. Compared with the current state-of-the-art transition-based parser STACKPTR BIBREF23, which has the same decoding time complexity as ours, our decoding takes $n$ steps while STACKPTR takes $2n-1$ steps and needs to compute an attention vector at each step, so our model would actually be much faster than STACKPTR in decoding.
For the non-projective inference in our model, the complexity is still $O(n^2)$. Since the order score and the arc score are two parts that do not affect each other, we can sort the order scores with time complexity of $O$($n$log$n$) and then iterate in this descending order. The iteration time complexity is $O(n)$ and determining the arc is also $O(n)$, so the overall time complexity is $O$($n$log$n$) $+$ $O(n^2)$, simplifying to $O(n^2)$.
Experiments
We evaluate our parsing model on the English Penn Treebank (PTB), the Chinese Penn Treebank (CTB), treebanks from two CoNLL shared tasks and the Universal Dependency (UD) Treebanks, using unlabeled attachment scores (UAS) and labeled attachment scores (LAS) as the metrics. Punctuation is ignored as in previous work BIBREF18. For English and Chinese, we use the projective inference, while for other languages, we use the non-projective one.
Experiments ::: Treebanks
For English, we use the Stanford Dependency (SD 3.3.0) BIBREF37 conversion of the Penn Treebank BIBREF38, and follow the standard splitting convention for PTB, using sections 2-21 for training, section 22 as a development set and section 23 as a test set. We use the Stanford POS tagger BIBREF39 to generate predicted POS tags.
For Chinese, we adopt the splitting convention for CTB BIBREF40 described in BIBREF19. The dependencies are converted with the Penn2Malt converter. Gold segmentation and POS tags are used as in previous work BIBREF19.
For the CoNLL Treebanks, we use the English treebank from the CoNLL-2008 shared task BIBREF41 and all 13 treebanks from the CoNLL-X shared task BIBREF42. The experimental settings are the same as BIBREF43.
For UD Treebanks, following the selection of BIBREF23, we take 12 treebanks from UD version 2.1 (Nivre et al. 2017): Bulgarian (bg), Catalan (ca), Czech (cs), Dutch (nl), English (en), French (fr), German (de), Italian (it), Norwegian (no), Romanian (ro), Russian (ru) and Spanish (es). We adopt the standard training/dev/test splits and use the universal POS tags provided in each treebank for all the languages.
Experiments ::: Implementation Details ::: Pre-trained Embeddings
We use the GloVe BIBREF44 trained on Wikipedia and Gigaword as external embeddings for English parsing. For other languages, we use the word vectors from 157 languages trained on Wikipedia and Crawl using fastText BIBREF45. We use the extracted BERT layer features to enhance the performance on CoNLL-X and UD treebanks.
Experiments ::: Implementation Details ::: Hyperparameters
The character embeddings are 8-dimensional and randomly initialized. In the character CNN, the convolutions have a window size of 3 and consist of 50 filters. We use 3 stacked bidirectional LSTMs with 512-dimensional hidden states each. The outputs of the BiLSTM employ a 512-dimensional MLP layer for the arc scorer, a 128-dimensional MLP layer for the relation scorer, and a 128-dimensional MLP layer for the parsing order scorer, all using ReLU as the activation function. Additionally, since we treat the parsing order score as a classification problem over parse tree layers, we set its range to $[0, 1, ..., 32]$.
Experiments ::: Implementation Details ::: Training
Parameter optimization is performed with the Adam optimizer with $\beta _1$ = $\beta _2$ = 0.9. We choose an initial learning rate of $\eta _0$ = 0.001. The learning rate $\eta $ is annealed by multiplying a fixed decay rate $\rho $ = 0.75 when parsing performance stops increasing on validation sets. To reduce the effects of an exploding gradient, we use a gradient clipping of 5.0. For the BiLSTM, we use recurrent dropout with a drop rate of 0.33 between hidden states and 0.33 between layers. Following BIBREF18, we also use embedding dropout with a rate of 0.33 on all word, character, and POS tag embeddings.
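For concreteness, a sketch of these optimization settings in PyTorch is shown below. This is our own code, not the authors'; the plateau patience is left at its default because the paper does not specify it, and norm-based gradient clipping is an assumption since the paper only states a clipping value of 5.0.

```python
import torch

def build_optimizer(model):
    """Adam with beta1 = beta2 = 0.9, initial LR 0.001, and LR decay by 0.75
    when the validation parsing score stops improving."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.9))
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.75)   # step this with the validation UAS/LAS
    return optimizer, scheduler

# Inside the training loop, before optimizer.step():
# torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
```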
Experiments ::: Main Results
We now compare our model with several other recently proposed parsers as shown in Table TABREF9. Our global greedy parser significantly outperforms the easy-first parser in BIBREF14 (HT-LSTM) on both PTB and CTB. Compared with other graph- and transition-based parsers, our model is also competitive with the state-of-the-art on PTB when considering the UAS metric. Compared to state-of-the-art parsers in transition and graph types, BIAF and STACKPTR, respectively, our model gives better or comparable results but with much faster training and decoding. Additionally, with the help of pre-trained language models, ELMo or BERT, our model can achieve even greater results.
In order to explore the impact of the parsing order objective on parsing performance, we replace the greedy inference with the traditional MST parsing algorithm (i.e., BIAF + parsing order objective); the result is shown as “This work (MST)", which gives a slight performance improvement compared to greedy inference, showing that the globally optimized decoding of the graph model still retains an advantage. Besides, compared to the standard training objective for a graph-based parser, the performance improvement is slight but still shows that the proposed parsing order objective is indeed helpful.
Experiments ::: CoNLL Results
Table TABREF11 presents the results on 14 treebanks from the CoNLL shared tasks. Our model yields the best results on both UAS and LAS metrics of all languages except the Japanese. As for Japanese, our model gives unsatisfactory results because the original treebank was written in Roman phonetic characters instead of hiragana, which is used by both common Japanese writing and our pre-trained embeddings. Despite this, our model overall still gives 1.0% higher average UAS and LAS than the previous best parser, BIAF.
Experiments ::: UD Results
Following BIBREF23, we report results on the test sets of 12 different languages from the UD treebanks along with the current state-of-the-art: BIAF and STACKPTR. Although both BIAF and STACKPTR parsers have achieved relatively high parsing accuracies on the 12 languages and have all UAS higher than 90%, our model achieves state-of-the-art results in all languages for both UAS and LAS. Overall, our model reports more than 1.0% higher average UAS than STACKPTR and 0.3% higher than BIAF.
Experiments ::: Runtime Analysis
In order to verify the time complexity analysis of our model, we measured the running time and speed of BIAF, STACKPTR and our model on the PTB training and development sets using the projective algorithm. The comparison in Table TABREF24 shows that in terms of convergence time, our model is essentially as fast as BIAF, while STACKPTR is much slower. For decoding, our model is the fastest, followed by BIAF, while STACKPTR is unexpectedly the slowest. This is because the time cost of attention scoring during decoding is not negligible and in fact accounts for a significant portion of the runtime.
Conclusion
This paper presents a new global greedy parser that enables greedy parsing inference compatible with the global arc scoring of graph-based parsing models, instead of the local feature scoring of transition-based parsing models. The proposed parser performs projective parsing using only two arc-building actions, and it also supports non-projective parsing when two extra non-projective arc-building actions are introduced. Compared to graph-based and transition-based parsers, our parser achieves a better tradeoff between parsing accuracy and efficiency by taking advantage of both graph-based models' training methods and transition-based models' linear-time decoding strategies. Experimental results on 28 treebanks show the effectiveness of our parser, which achieves good performance on 27 treebanks, including the PTB and CTB benchmarks. | Proposed vs best baseline:
Decoding: 8541 vs 8532 tokens/sec
Training: 8h vs 8h |
c431c142f5b82374746a2b2f18b40c6874f7131d | c431c142f5b82374746a2b2f18b40c6874f7131d_0 | Q: What datasets was the method evaluated on?
Text: Introduction
Neural Machine Translation (NMT) has made considerable progress in recent years BIBREF0 , BIBREF1 , BIBREF2 . Traditional NMT has relied solely on parallel sentence pairs for training data, which can be an expensive and scarce resource. This motivates the use of monolingual data, usually more abundant BIBREF3 . Approaches using monolingual data for machine translation include language model fusion for both phrase-based BIBREF4 , BIBREF5 and neural MT BIBREF6 , BIBREF7 , back-translation BIBREF8 , BIBREF9 , unsupervised machine translation BIBREF10 , BIBREF11 , dual learning BIBREF12 , BIBREF13 , BIBREF14 , and multi-task learning BIBREF15 .
We focus on back-translation (BT), which, despite its simplicity, has thus far been the most effective technique BIBREF16 , BIBREF17 , BIBREF18 . Back-translation entails training an intermediate target-to-source model on genuine bitext, and using this model to translate a large monolingual corpus from the target into the source language. This allows training a source-to-target model on a mixture of genuine parallel data and synthetic pairs from back-translation.
We build upon edunov2018understanding and imamura2018enhancement, who investigate BT at the scale of hundreds of millions of sentences. Their work studies different decoding/generation methods for back-translation: in addition to regular beam search, they consider sampling and adding noise to the one-best hypothesis produced by beam search. They show that sampled BT and noised-beam BT significantly outperform standard BT, and attribute this success to increased source-side diversity (sections 5.2 and 4.4).
Our work investigates noised-beam BT (NoisedBT) and questions the role noise is playing. Rather than increasing source diversity, our work instead suggests that the performance gains come simply from signaling to the model that the source side is back-translated, allowing it to treat the synthetic parallel data differently than the natural bitext. We hypothesize that BT introduces both helpful signal (strong target-language signal and weak cross-lingual signal) and harmful signal (amplifying the biases of machine translation). Indicating to the model whether a given training sentence is back-translated should allow the model to separate the helpful and harmful signal.
To support this hypothesis, we first demonstrate that the permutation and word-dropping noise used by BIBREF19 do not improve or significantly degrade NMT accuracy, corroborating that noise might act as an indicator that the source is back-translated, without much loss in mutual information between the source and target. We then train models on WMT English-German (EnDe) without BT noise, and instead explicitly tag the synthetic data with a reserved token. We call this technique “Tagged Back-Translation" (TaggedBT). These models achieve performance equal to or slightly higher than the noised variants. We repeat these experiments with WMT English-Romanian (EnRo), where NoisedBT underperforms standard BT and TaggedBT improves over both techniques. We demonstrate that TaggedBT also allows for effective iterative back-translation with EnRo, a technique which saw quality losses when applied with standard back-translation.
To further our understanding of TaggedBT, we investigate the biases encoded in models by comparing the entropy of their attention matrices, and look at the attention weight on the tag. We conclude by investigating the effects of the back-translation tag at decoding time.
Related Work
This section describes prior work exploiting target-side monolingual data and discusses related work tagging NMT training data.
Leveraging Monolingual Data for NMT
Monolingual data can provide valuable information to improve translation quality. Various methods for using target-side LMs have proven effective for NMT BIBREF20 , BIBREF7 , but have tended to be less successful than back-translation – for example, BIBREF7 report under +0.5 Bleu over their baseline on EnDe newstest14, whereas BIBREF19 report over +4.0 Bleu on the same test set. Furthermore, there is no straightforward way to incorporate source-side monolingual data into a neural system with an LM.
Back-translation was originally introduced for phrase-based systems BIBREF21 , BIBREF22 , but flourished in NMT after work by sennrich2016improving. Several approaches have looked into iterative forward- and BT experiments (using source-side monolingual data), including cotterell2018explaining, hoang2018iterative, and niu2018bidirectional. Recently, iterative back-translation in both directions has been devised as a way to address unsupervised machine translation BIBREF23 , BIBREF11 .
Recent work has focused on the importance of diversity and complexity in synthetic training data. BIBREF24 find that BT benefits difficult-to-translate words the most, and select from the back-translated corpus by oversampling words with high prediction loss. BIBREF25 argue that in order for BT to enhance the encoder, it must have a more diverse source side, and sample several back-translated source sentences for each monolingual target sentence. Our work follows most closely BIBREF19 , who investigate alternative decoding schemes for BT. Like BIBREF25 , they argue that BT through beam or greedy decoding leads to an overly regular domain on the source side, which poorly represents the diverse distribution of natural text.
Beyond the scope of this work, we briefly mention alternative techniques leveraging monolingual data, like forward translation BIBREF26 , BIBREF27 , or source copying BIBREF28 .
Training Data Tagging for NMT
Tags have been used for various purposes in NMT. Tags on the source sentence can indicate the target language in multi-lingual models BIBREF29 . BIBREF30 use tags in a similar fashion to control the formality of a translation from English to Japanese. BIBREF31 use tags to control gender in translation. Most relevant to our work, BIBREF32 use tags to mark source sentence domain in a multi-domain setting.
Experimental Setup
This section presents our datasets, evaluation protocols and model architectures. It also describes our back-translation procedure, as well as noising and tagging strategies.
Data
We perform our experiments on WMT18 EnDe bitext, WMT16 EnRo bitext, and WMT15 EnFr bitext respectively. We use WMT Newscrawl for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr). For bitext, we filter out empty sentences and sentences longer than 250 subwords. We remove pairs whose whitespace-tokenized length ratio is greater than 2. This results in about 5.0M pairs for EnDe, and 0.6M pairs for EnRo. We do not filter the EnFr bitext, resulting in 41M sentence pairs.
For monolingual data, we deduplicate and filter sentences with more than 70 tokens or 500 characters. Furthermore, after back-translation, we remove any sentence pairs where the back-translated source is longer than 75 tokens or 550 characters. This results in 216.5M sentences for EnDe, 2.2M for EnRo, 149.9M for RoEn, and 39M for EnFr. For monolingual data, all tokens are defined by whitespace tokenization, not wordpieces.
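A minimal sketch of these corpus filters, with the thresholds quoted above, is given below; the subword length function is a placeholder for whatever tokenizer is used, and interpreting the length-ratio constraint as the max/min of the whitespace-token counts is an assumption.

```python
def keep_bitext_pair(src, tgt, subword_len):
    if not src.strip() or not tgt.strip():
        return False                                  # drop empty sentences
    if subword_len(src) > 250 or subword_len(tgt) > 250:
        return False                                  # drop sentences longer than 250 subwords
    s, t = len(src.split()), len(tgt.split())
    return max(s, t) / max(1, min(s, t)) <= 2.0       # whitespace-token length ratio <= 2

def keep_monolingual(sent, seen):
    if sent in seen:
        return False                                  # deduplicate
    seen.add(sent)
    return len(sent.split()) <= 70 and len(sent) <= 500

def keep_backtranslated_pair(bt_source, target):
    # applied after back-translation, on the synthetic source side
    return len(bt_source.split()) <= 75 and len(bt_source) <= 550

print(keep_bitext_pair("a short sentence .", "ein kurzer satz .", lambda s: len(s.split())))
```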
The DeEn model used to generate BT data has 28.6 SacreBleu on newstest12, the RoEn model used for BT has a test SacreBleu of 31.9 (see Table 4 .b), and the FrEn model used to generate the BT data has 39.2 SacreBleu on newstest14.
Evaluation
We rely on Bleu score BIBREF33 as our evaluation metric.
While well established, any slight difference in post-processing and Bleu computation can have a dramatic impact on output values BIBREF34 . For example, BIBREF35 report 33.3 Bleu on EnRo using unsupervised NMT, which at first seems comparable to our reported 33.4 SacreBleu from iterative TaggedBT. However, when we use their preprocessing scripts and evaluation protocol, our system achieves 39.2 Bleu on the same data, which is close to 6 points higher than the same model evaluated by SacreBleu.
We therefore strictly report SacreBleu, using the reference implementation from post2018call, which aims to standardize Bleu evaluation.
Architecture
We use the transformer-base and transformer-big architectures BIBREF2 implemented in lingvo BIBREF36 . Transformer-base is used for the bitext noising experiments and the EnRo experiments, whereas the transformer-big is used for the EnDe tasks with BT. Both use a vocabulary of 32k subword units. As an alternative to the checkpoint averaging used in edunov2018understanding, we train with exponentially weighted moving average (EMA) decay with weight decay parameter $\alpha =0.999$ BIBREF37 .
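For concreteness, the EMA of the model parameters with decay 0.999 amounts to the small update below; this framework-agnostic sketch is ours and is not the lingvo implementation.

```python
def ema_update(ema_params, params, decay=0.999):
    # ema_params and params are dicts mapping parameter names to arrays/tensors
    for name, value in params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params

print(ema_update({"w": 1.0}, {"w": 0.0}))   # toy scalar example: 1.0 -> 0.999
```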
Transformer-base models are trained on 16 GPUs with synchronous gradient updates and a per-GPU batch size of 4,096 tokens, for an effective batch size of 64k tokens/step. Training lasts 400k steps, passing over 24B tokens. For the final EnDe TaggedBT model, we train transformer-big similarly but on 128 GPUs, for an effective batch size of 512k tokens/step. A training run of 300k steps therefore sees about 150B tokens. We pick checkpoints with newstest2012 for EnDe and newsdev2016 for EnRo.
Noising
We focused on noised beam BT, the most effective noising approach according to BIBREF19 . Before training, we noised the decoded data BIBREF10 by applying 10% word-dropout, 10% word blanking, and a 3-constrained permutation (a permutation such that no token moves further than 3 tokens from its original position). We refer to data generated this way as NoisedBT. Additionally, we experiment using only the 3-constrained permutation and no word dropout/blanking, which we abbreviate as P3BT.
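A small sketch of this noising procedure is shown below. The 10% dropout, 10% blanking, and the k = 3 constraint come from the text; implementing the 3-constrained permutation by adding uniform noise in [0, k+1) to token positions and sorting is a common construction (it keeps every token within k positions of its origin) and is our assumption, as is the spelling of the blank token.

```python
import random

def noise_sentence(tokens, drop=0.10, blank=0.10, k=3, blank_token="<BLANK>"):
    out = []
    for tok in tokens:
        r = random.random()
        if r < drop:
            continue                                             # word dropout
        out.append(blank_token if r < drop + blank else tok)     # word blanking
    keys = [i + random.uniform(0, k + 1) for i in range(len(out))]
    order = sorted(range(len(out)), key=lambda i: keys[i])       # 3-constrained permutation
    return [out[i] for i in order]

print(" ".join(noise_sentence("the quick brown fox jumps over the lazy dog".split())))
```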
Tagging
We tag our BT training data by prepending a reserved token to the input sequence, which is then treated in the same way as any other token. We also experiment with both noising and tagging together, which we call Tagged Noised Back-Translation, or TaggedNoisedBT. This consists simply of prepending the $<$ BT $>$ tag to each noised training example.
An example training sentence for each of these set-ups can be seen in Table 1 . We do not tag the bitext, and always train on a mix of back-translated data and (untagged) bitext unless explicitly stated otherwise.
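In code, the tagging variants of Table 1 reduce to prepending the reserved token; the sketch below reuses the `noise_sentence` helper from the noising sketch above for the TaggedNoisedBT case, and the literal token string is illustrative.

```python
def tagged_bt(bt_source_tokens):
    return ["<BT>"] + bt_source_tokens                    # TaggedBT: tag the plain BT source

def tagged_noised_bt(bt_source_tokens):
    return ["<BT>"] + noise_sentence(bt_source_tokens)    # TaggedNoisedBT: tag the noised source

print(" ".join(tagged_bt("this is a synthetic source sentence".split())))
```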
Results
This section studies the impact of training data noise on translation quality, and then presents our results with TaggedBT on EnDe and EnRo.
Noising Parallel Bitext
We first show that noising EnDe bitext sources does not seriously impact the translation quality of the transformer-base baseline. For each sentence pair in the corpus, we flip a coin and noise the source sentence with probability $p$ . We then train a model from scratch on this partially noised dataset. Table 2 shows results for various values of $p$ . Specifically, it presents the somewhat unexpected finding that even when noising 100% of the source bitext (so the model has never seen well-formed English), Bleu on well-formed test data only drops by 2.5.
This result prompts the following line of reasoning about the role of noise in BT: (i) By itself, noising does not add meaningful signal (or else it would improve performance); (ii) It also does not damage the signal much; (iii) In the context of back-translation, the noise could therefore signal whether a sentence were back-translated, without significantly degrading performance.
Tagged Back-Translation for EnDe
We compare the results of training on a mixture of bitext and a random sample of 24M back-translated sentences in Table 3 .a, for the various set-ups of BT described in the Noising and Tagging sections above. Like edunov2018understanding, we confirm that BT improves over bitext alone, and noised BT improves over standard BT by about the same margin. All methods of marking the source text as back-translated (NoisedBT, P3BT, TaggedBT, and TaggedNoisedBT) perform about equally, with TaggedBT having the highest average Bleu by a small margin. Tagging and noising together (TaggedNoisedBT) does not improve over either tagging or noising alone, supporting the conclusion that tagging and noising are not orthogonal signals but rather different means to the same end.
Table 3 .b verifies our result at scale applying TaggedBT on the full BT dataset (216.5M sentences), upsampling the bitext so that each batch contains an expected 20% of bitext. As in the smaller scenario, TaggedBT matches or slightly out-performs NoisedBT, with an advantage on seven test-sets and a disadvantage on one. We also compare our results to the best-performing model from edunov2018understanding. Our model is on par with or slightly superior to their result, out-performing it on four test sets and under-performing it on two, with the largest advantage on Newstest2018 (+1.4 Bleu).
As a supplementary experiment, we consider training only on BT data, with no bitext. We compare this to training only on NoisedBT data. If noising in fact increases the quality or diversity of the data, one would expect the NoisedBT data to yield higher performance than training on unaltered BT data, when in fact it has about 1 Bleu lower performance (Table 3 .a, “BT alone" and “NoisedBT alone").
We also compare NoisedBT versus TaggedNoisedBT in a set-up where the bitext itself is noised. In this scenario, the noise can no longer be used by the model as an implicit tag to differentiate between bitext and synthetic BT data, so we expect the TaggedNoisedBT variant to perform better than NoisedBT by a similar margin to NoisedBT's improvement over BT in the unnoised-bitext setting. The last sub-section of Table 3 .a confirms this.
Tagged Back-Translation for EnRo
We repeat these experiments for WMT EnRo (Table 4 ). This is a much lower-resource task than EnDe, and thus can benefit more from monolingual data. In this case, NoisedBT is actually harmful, lagging standard BT by -0.6 Bleu. TaggedBT closes this gap and passes standard BT by +0.4 Bleu, for a total gain of +1.0 Bleu over NoisedBT.
Tagged Back-Translation for EnFr
We performed a minimal set of experiments on WMT EnFr, which are summarized in Table 5 . This is a much higher-resource language pair than either EnRo or EnDe, but BIBREF19 demonstrate that noised BT (using sampling) can still help in this set-up. In this case, we see that BT alone hurts performance compared to the strong bitext baseline, but NoisedBT indeed surpasses the bitext model. TaggedBT out-performs all other methods, beating NoisedBT by an average of +0.3 Bleu over all test sets.
It is worth noting that our numbers are lower than those reported by BIBREF19 on the years they report (36.1, 43.8, and 40.9 on 2013, 2014, and 2015 respectively). We did not investigate this result. We suspect that this is an error or suboptimality in our set-up, as we did not optimize these models, and ran only one experiment for each of the four set-ups. Alternatively, sampling could outperform noising in the large-data regime.
Iterative Tagged Back-Translation
We further investigate the effects of TaggedBT by performing one round of iterative back-translation BIBREF38 , BIBREF39 , BIBREF40 , and find another difference between the different varieties of BT: NoisedBT and TaggedBT allow the model to bootstrap improvements from an improved reverse model, whereas standard BT does not. This is consistent with our argument that data tagging allows the model to extract information out of each data set more effectively.
For the purposes of this paper we call a model trained with standard back-translation an Iteration-1 BT model, where the back-translations were generated by a model trained only on bitext. We inductively define the Iteration-k BT model as that model which is trained on BT data generated by an Iteration-(k-1) BT model, for $k>1$ . Unless otherwise specified, any BT model mentioned in this paper is an Iteration-1 BT model.
We perform these experiments on the English-Romanian dataset, which is smaller and thus better suited for this computationally expensive process. We used the (Iteration-1) TaggedBT model to generate the RoEn back-translated training data. Using this we trained a superior RoEn model, mixing 80% BT data with 20% bitext. Using this Iteration-2 RoEn model, we generated new EnRo BT data, which we used to train the Iteration-3 EnRo models. SacreBleu scores for all these models are displayed in Table 4 .
We find that the Iteration-3 models improve over their Iteration-1 counterparts only for NoisedBT (+1.0 Bleu, dev+test avg) and TaggedBT (+0.7 Bleu, dev+test avg), whereas the Iteration-3 standard BT model shows no improvement over its Iteration-1 counterpart (-0.1 Bleu, dev+test avg). In other words, both techniques that (explicitly or implicitly) tag synthetic data benefit from iterative BT. We speculate that this separation of the synthetic and natural domains allows the model to bootstrap more effectively from the increasing quality of the back-translated data while not being damaged by its quality issues, whereas the simple BT model cannot make this distinction, and is equally “confused" by the biases in higher or lower-quality BT data.
An identical experiment with EnDe did not see either gains or losses in Bleu from iteration-3 TaggedBT. This is likely because there is less room to bootstrap with the larger-capacity model. This said, we do not wish to read too deeply into these results, as the effect size is not large, and neither is the number of experiments. A more thorough suite of experiments is warranted before any strong conclusions can be made on the implications of tagging on iterative BT.
Analysis
In an attempt to gain further insight into TaggedBT as it compares with standard BT or NoisedBT, we examine attention matrices in the presence of the back translation tag and measure the impact of the tag at decoding time.
Attention Entropy and Sink-Ratio
To understand how the model treats the tag and what biases it learns from the data, we investigate the entropy of the attention probability distribution, as well as the attention captured by the tag.
We examine decoder attention (at the top layer) on the first source token. We define Attention Sink Ratio for index $j$ ( $\textrm {ASR}_j$ ) as the averaged attention over the $j$ th token, normalized by uniform attention, i.e. $ \textrm {ASR}_j(x, \hat{y}) = \frac{1}{|\hat{y}|} \sum _{i = 1}^{ | \hat{y} | } \frac{\alpha _{ij}}{\tilde{\alpha }} $
where $\alpha _{ij}$ is the attention value for target token $i$ in hypothesis $\hat{y}$ over source token $j$ and $\tilde{\alpha } = \frac{1}{|x|}$ corresponds to uniform attention. We examine ASR on text that has been noised and/or tagged (depending on the model), to understand how BT sentences are treated during training. For the tagged variants, there is heavy attention on the tag when it is present (Table ), indicating that the model relies on the information signalled by the tag.
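Both diagnostics used in this analysis are direct transcriptions of the formulas: the ASR just defined, and the length-normalized attention entropy introduced in the next paragraph. The NumPy sketch below assumes the decoder attention is available as a (target length x source length) matrix whose rows sum to one; the toy matrix is only for demonstration.

```python
import numpy as np

def attention_sink_ratio(attn, j):
    # average attention on source position j, normalized by uniform attention 1/|x|
    _, source_len = attn.shape
    return attn[:, j].mean() / (1.0 / source_len)

def attention_entropy(attn):
    # length-normalized Shannon entropy of the attention rows (see the next paragraph)
    _, source_len = attn.shape
    ent = -(attn * np.log(attn + 1e-12)).sum(axis=1) / np.log(source_len)
    return ent.mean()

attn = np.random.dirichlet(np.ones(8), size=12)   # toy 12x8 attention matrix, rows sum to 1
print(attention_sink_ratio(attn, j=0), attention_entropy(attn))
```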
Our second analysis probes word-for-word translation bias through the average source-token entropy of the attention probability model when decoding natural text. Table reports the average length-normalized Shannon entropy: $\tilde{H}(x, \hat{y}) = - \frac{1}{|\hat{y}|} \sum _{i=1}^{|\hat{y}|} \frac{1}{\log |x|} \sum _{j=1}^{|x|} \alpha _{ij} \log (\alpha _{ij})$. The entropy of the attention probabilities from the model trained on BT data is the clear outlier. This low entropy corresponds to a concentrated attention matrix, which we observed to be concentrated on the diagonal (see the EnDe and EnRo BT attention figures). This could indicate the presence of word-by-word translation, a consequence of the harmful part of the signal from back-translated data. The entropy on parallel data from the NoisedBT model is much higher, corresponding to more diffuse attention, as seen in the corresponding EnDe and EnRo NoisedBT figures. In other words, the word-for-word translation biases in BT data, which were incorporated into the BT model, have been manually undone by the noise, so the model's understanding of how to decode parallel text is not corrupted. We see that TaggedBT leads to a similarly high entropy, indicating the model has learnt this without needing to manually “break" the literal-translation bias. As a sanity check, we see that the entropy of the P3BT model's attention is also high, but lower than that of the NoisedBT model, because P3 noise is less destructive. The one surprising entry in this table is probably the low entropy of TaggedNoisedBT. Our best explanation is that TaggedNoisedBT puts disproportionately high attention on the sentence-end token, with 1.4x the $\textrm {ASR}_{|x|}$ that TaggedBT has, naturally leading to lower entropy. | WMT18 EnDe bitext, WMT16 EnRo bitext, WMT15 EnFr bitext, We perform our experiments on WMT18 EnDe bitext, WMT16 EnRo bitext, and WMT15 EnFr bitext respectively. We use WMT Newscrawl for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr). For bitext, we filter out empty sentences and sentences longer than 250 subwords. We remove pairs whose whitespace-tokenized length ratio is greater than 2. This results in about 5.0M pairs for EnDe, and 0.6M pairs for EnRo. We do not filter the EnFr bitext, resulting in 41M sentence pairs.
7835d8f578386834c02e2c9aba78a345059d56ca | 7835d8f578386834c02e2c9aba78a345059d56ca_0 | Q: Is the model evaluated against a baseline?
Text: Introduction
Automatic dubbing can be regarded as an extension of the speech-to-speech translation (STST) task BIBREF0, which is generally seen as the combination of three sub-tasks: (i) transcribing speech to text in a source language (ASR), (ii) translating text from a source to a target language (MT) and (iii) generating speech from text in a target language (TTS). Independently from the implementation approach BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, the main goal of STST is producing an output that reflects the linguistic content of the original sentence. On the other hand, automatic dubbing aims to replace all speech contained in a video document with speech in a different language, so that the result sounds and looks as natural as the original. Hence, in addition to conveying the same content of the original utterance, dubbing should also match the original timbre, emotion, duration, prosody, background noise, and reverberation.
While STST has been addressed for a long time and by several research labs BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF0, comparatively fewer and more scattered efforts have been devoted to automatic dubbing BIBREF7, BIBREF8, BIBREF9, BIBREF10, although the potential demand for such technology could be huge. In fact, multimedia content created and put online has been growing at an exponential rate in the last decade, while the availability and cost of human skills for subtitling and dubbing still remain a barrier to its diffusion worldwide. Professional dubbing of a video file is a very labor-intensive process that involves many steps: (i) extracting speech segments from the audio track and annotating these with speaker information; (ii) transcribing the speech segments; (iii) translating the transcript into the target language; (iv) adapting the translation for timing; (v) choosing the voice actors; (vi) performing the dubbing sessions; (vii) fine-aligning the dubbed speech segments; (viii) mixing the new voice tracks within the original soundtrack.
Automatic dubbing has been addressed in both monolingual and cross-lingual settings. In BIBREF14, synchronization of two speech signals with the same content was tackled with time-alignment via dynamic time warping. In BIBREF15 automatic monolingual dubbing for TV users with special needs was generated from subtitles. However, due to the poor correlation between length and timing of the subtitles, TTS output frequently broke the timing boundaries. To avoid unnatural time compression of the TTS voice when fitting timing constraints, BIBREF7 proposed phone-dependent time compression and text simplification to shorten the subtitles, while BIBREF8 leveraged scene-change detection to relax the subtitle time boundaries. Regarding cross-lingual dubbing, lip movement synchronization was tackled in BIBREF9 by directly modifying the actor's mouth motion via shuffling of the actor's video frames. While the method does not use any prior linguistic or phonetic knowledge, it has only been demonstrated under very simple and controlled conditions. Finally, most related to our contribution is BIBREF10, which discusses speech synchronization at the phrase level (prosodic alignment) for English-to-Spanish automatic dubbing.
In this paper we present research work to enhance a STST pipeline in order to comply with the timing and rendering requirements posed by cross-lingual automatic dubbing of TED Talk videos. Similarly to BIBREF7, we also shorten the TTS script by directly modifying the MT engine rather than via text simplification. As in BIBREF10, we synchronize phrases across languages, but follow a fluency-based rather than content-based criterion and replace generation and rescoring of hypotheses in BIBREF10 with a more efficient dynamic programming solution. Moreover, we extend BIBREF10 by enhancing neural MT and neural TTS to improve speech synchronization, and by performing audio rendering on the dubbed speech to make it sound more real inside the video.
In the following sections, we introduce the overall architecture (Section 2) and the proposed enhancements (Sections 3-6). Then, we present results (Section 7) of experiments evaluating the naturalness of automatic dubbing of TED Talk clips from English into Italian. To our knowledge, this is the first work on automatic dubbing that integrates enhanced deep learning models for MT, TTS and audio rendering, and evaluates them on real-world videos.
Automatic Dubbing
With some approximation, we consider here automatic dubbing of the audio track of a video as the task of STST, i.e. ASR + MT + TTS, with the additional requirement that the output must be temporally, prosodically and acoustically close to the original audio. We investigate an architecture (see Figure 1) that enhances the STST pipeline with (i) enhanced MT able to generate translations of variable lengths, (ii) a prosodic alignment module that temporally aligns the MT output with the speech segments in the original audio, (iii) enhanced TTS to accurately control the duration of each produced utterance, and, finally, (iv) audio rendering that adds to the TTS output background noise and reverberation extracted from the original audio. In the following, we describe each component in detail, with the exception of ASR, for which we use an off-the-shelf online service BIBREF16.
Machine Translation
Our approach to control the length of MT output is inspired by target forcing in multilingual neural MT BIBREF17, BIBREF18. We partition the training sentence pairs into three groups (short, normal, long) according to the target/source string-length ratio. In practice, we select two thresholds $t_1$ and $t_2$, and partition training data according to the length-ratio intervals $[0,t_1)$, $[t_1,t_2)$ and $[t_2,\infty ]$. At training time a length token is prepended to each source sentence according to its group, in order to let the neural MT model discriminate between the groups. At inference time, the length token is instead prepended to bias the model to generate a translation of the desired length type. We trained a Transformer model BIBREF19 with output length control on web crawled and proprietary data amounting to 150 million English-Italian sentence pairs (with no overlap with the test data). The model has encoder and decoder with 6 layers, layer size of 1024, hidden size of 4096 on feed forward layers, and 16 heads in the multi-head attention. For the reported experiments, we trained the models with thresholds $t_1=0.95$ and $t_2=1.05$ and generated at inference time translations of the shortest type, resulting in an average length ratio of $0.97$ on our test set. A detailed account of the approach, the followed training procedure and experimental results on the same task of this paper can be found in BIBREF20. Finally, as baseline MT system we used an online service.
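The data-side mechanics of this length control can be sketched as follows: each training source is prefixed with a group token derived from the target/source length ratio (thresholds $t_1=0.95$ and $t_2=1.05$ as above), and at inference the shortest-type token is prepended. The token spellings and the use of character counts for the string-length ratio are illustrative assumptions.

```python
def length_group(source, target, t1=0.95, t2=1.05):
    ratio = len(target) / max(1, len(source))          # target/source string-length ratio
    if ratio < t1:
        return "<short>"
    return "<normal>" if ratio < t2 else "<long>"

def prepare_training_pair(source, target):
    return f"{length_group(source, target)} {source}", target

def prepare_inference_input(source):
    return f"<short> {source}"                         # bias toward the shortest length type

print(prepare_training_pair("the cat sat on the mat", "il gatto era seduto sul tappeto"))
print(prepare_inference_input("the cat sat on the mat"))
```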
Prosodic Alignment
Prosodic alignment BIBREF10 is the problem of segmenting the target sentence to optimally match the distribution of words and pauses of the source sentence. Let ${\bf e}=e_1,e_2,\ldots ,e_n$ be a source sentence of $n$ words which is segmented according to $k$ breakpoints $1 \le i_1 < i_2 < \ldots i_k=n$, shortly denoted with ${\bf i}$. Given a target sentence ${\bf f}=f_1,f_2,\ldots ,f_m$ of $m$ words, the goal is to find within it $k$ corresponding breakpoints $1 \le j_1 < j_2 < \ldots j_k=m$ (shortly denoted with ${\bf j}$) that maximize the probability:
By assuming a Markovian dependency on ${\bf j}$, i.e.:
and omitting from the notation the constant terms ${\bf i},{\bf e},{\bf f}$, we can derive the following recurrent quantity:
where $Q(j,t)$ denotes the log-probability of the optimal segmentation of ${\bf f}$ up to position $j$ with $t$ break points. It is easy to show that the solution of (DISPLAY_FORM5) corresponds to $Q(m,k)$ and that optimal segmentation can be efficiently computed via dynamic-programming. Let ${\tilde{f}}_t = f_{j_{t-1}+1},\ldots ,f_{j_t}$ and ${\tilde{e}}_t =e_{i_{t-1}+1},\ldots ,e_{i_t}$ indicate the $t$-th segments of ${\bf f}$ and ${\bf e}$, respectively, we define the conditional probability of the $t$-th break point in ${\bf f}$ by:
The first term computes the relative match in duration between the corresponding $t$-th segments, while the second term measures the linguistic plausibility of placing a break after position ${j_t}$ in ${\bf f}$. For this, we simply compute the following ratio of language model perplexities over a text window centered on the break point, assuming or not the presence of a pause (comma, semicolon or dash) in the middle:
In our implementation, we use a larger text window (the last and first two words), replace words with parts of speech, and estimate the language model on a large English corpus.
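The dynamic program itself is compact; the sketch below fills a table $Q[j][t]$ of best log-probabilities and backtracks to recover the break points. The per-segment scoring function is passed in as a callback because its exact form (the combination of the duration-match and break-plausibility terms) is not reproduced here; the toy score in the usage line is purely illustrative.

```python
import math

def prosodic_align(m, k, segment_logprob):
    NEG = -math.inf
    Q = [[NEG] * (k + 1) for _ in range(m + 1)]        # Q[j][t]: best score, j words, t breaks
    back = [[None] * (k + 1) for _ in range(m + 1)]
    Q[0][0] = 0.0
    for t in range(1, k + 1):
        for j in range(t, m + 1):
            for prev in range(t - 1, j):               # candidate previous break point j_{t-1}
                score = Q[prev][t - 1] + segment_logprob(prev, j, t)
                if score > Q[j][t]:
                    Q[j][t], back[j][t] = score, prev
    breaks, j = [], m                                  # the last break is fixed at j_k = m
    for t in range(k, 0, -1):
        breaks.append(j)
        j = back[j][t]
    return list(reversed(breaks)), Q[m][k]

# toy usage: prefer three segments of roughly equal length in a 10-word target sentence
print(prosodic_align(10, 3, lambda prev, j, t: -abs((j - prev) - 10 / 3)))
```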
Text To Speech
Our neural TTS system consists of two modules: a Context Generation module, which generates a context sequence from the input text, and a Neural Vocoder module, which converts the context sequence into a speech waveform. The first one is an attention-based sequence-to-sequence network BIBREF21, BIBREF22 that predicts a Mel-spectrogram given an input text. A grapheme-to-phoneme module converts the sequence of words into a sequence of phonemes plus augmented features like punctuation marks and prosody related features derived from the text (e.g. lexical stress). For the Context Generation module, we trained speaker-dependent models on two Italian voices, male and female, with 10 and 37 hours of high quality recordings, respectively. We use the Universal Neural Vocoder introduced in BIBREF23, pre-trained with 2000 utterances per each of the 74 voices from a proprietary database.
To ensure close matching of the duration of Italian TTS output with timing information extracted from the original English audio, for each utterance we resize the generated Mel spectrogram using spline interpolation prior to running the Neural Vocoder. We empirically observed that this method produces speech of better quality than traditional time-stretching.
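The duration-matching step can be sketched with SciPy as below: the predicted Mel spectrogram (frequency bins x frames) is resampled along the time axis so that its frame count matches the target duration. The use of a cubic spline and the toy sizes are assumptions; the text only specifies spline interpolation.

```python
import numpy as np
from scipy.interpolate import interp1d

def resize_mel(mel, target_frames, spline_order=3):
    src_frames = mel.shape[1]
    x_old = np.linspace(0.0, 1.0, src_frames)
    x_new = np.linspace(0.0, 1.0, target_frames)
    return interp1d(x_old, mel, kind=spline_order, axis=1)(x_new)   # spline over the time axis

mel = np.random.rand(80, 240)            # toy 80-bin, 240-frame spectrogram
print(resize_mel(mel, 300).shape)        # stretched to 300 frames before vocoding
```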
Audio Rendering ::: Foreground-Background Separation
The input audio can be seen as a mixture of foreground (speech) and background (everything else) and our goal is to extract the background and add it to the dubbed speech to make it sound more real and similar to the original. For the foreground-background separation task, we adapted the popular U-Net BIBREF24 architecture, which is described in detail in BIBREF25 for a music-vocal separation task. It consists of a series of down-sampling blocks, followed by one ’bottom’ convolutional layer, followed by a series of up-sampling blocks with skip connections from the down-sampling to the up-sampling blocks. Because of the down-sampling blocks, the model can compute a number of high-level features on coarser time scales, which are concatenated with the local, high-resolution features computed from the same-level up-sampling block. This concatenation results in multi-scale features for prediction. The model operates on a time-frequency representation (spectrograms) of the audio mixture and it outputs two soft ratio masks corresponding to foreground and background, respectively, which are multiplied element-wise with the mixed spectrogram, to obtain the final estimates of the two sources. Finally, the estimated spectrograms go through an inverse short-term Fourier transform block to produce raw time domain signals. The loss function used to train the model is the sum of the $L_1$ losses between the target and the masked input spectrograms, for the foreground and the background BIBREF25, respectively. The model is trained with the Adam optimizer on mixed audio provided with foreground and background ground truths. Training data was created from 360 hours of clean speech from Librispeech (foreground) and 120 hours of recordings taken from AudioSet BIBREF26 (background), from which speech was filtered out using a Voice Activity Detector (VAD). Foreground and background are mixed at different signal-to-noise ratios (SNR) to generate the audio mixtures.
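The masking and loss logic described here is sketched below with the U-Net replaced by a one-layer stand-in, since the point is only how the two soft ratio masks and the summed $L_1$ losses fit together; using a sigmoid to keep the masks in [0, 1] is our assumption.

```python
import torch
import torch.nn as nn

class MaskNet(nn.Module):                       # stand-in for the U-Net described above
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(1, 2, kernel_size=3, padding=1)

    def forward(self, mix_spec):                # mix_spec: (batch, 1, freq, time) magnitudes
        masks = torch.sigmoid(self.proj(mix_spec))                  # two soft ratio masks
        return masks[:, :1] * mix_spec, masks[:, 1:] * mix_spec     # foreground, background

def separation_loss(fg_est, bg_est, fg_true, bg_true):
    return nn.functional.l1_loss(fg_est, fg_true) + nn.functional.l1_loss(bg_est, bg_true)

mix = torch.rand(2, 1, 513, 128)
fg, bg = MaskNet()(mix)
print(separation_loss(fg, bg, torch.rand_like(fg), torch.rand_like(bg)).item())
```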
Audio Rendering ::: Re-reverberation
In this step, we estimate the environment reverberation from the original audio and apply it to the dubbed audio. Unfortunately, estimating the room impulse response (RIR) from a reverberated signal requires solving an ill-posed blind deconvolution problem. Hence, instead of estimating the RIR, we do a blind estimation of the reverberation time (RT), which is commonly used to assess the amount of room reverberation or its effects. The RT is defined as the time interval in which the energy of a steady-state sound field decays 60 dB below its initial level after switching off the excitation source. In this work we use a Maximum Likelihood Estimation (MLE) based RT estimate (see details of the method in BIBREF27). Estimated RT is then used to generate a synthetic RIR using a publicly available RIR generator BIBREF28. This synthetic RIR is finally applied to the dubbed audio.
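Once a synthetic RIR is available, applying it to the dubbed speech is a convolution; the sketch below uses a toy exponentially decaying noise tail in place of the output of the RT estimator and RIR generator (BIBREF27, BIBREF28), and the peak renormalization is our own choice for the illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_rir(dry_speech, rir):
    wet = fftconvolve(dry_speech, rir, mode="full")[: len(dry_speech)]
    peak = np.max(np.abs(wet)) + 1e-9
    return wet / peak * np.max(np.abs(dry_speech))     # keep the original peak level

sr = 16000
toy_rir = np.exp(-np.linspace(0, 8, sr // 2)) * np.random.randn(sr // 2)   # decaying tail
dubbed = 0.1 * np.random.randn(3 * sr)                                     # 3 s of toy "speech"
print(apply_rir(dubbed, toy_rir).shape)
```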
Experimental Evaluation
We evaluated our automatic dubbing architecture (Figure 1), by running perceptual evaluations in which users are asked to grade the naturalness of video clips dubbed with three configurations (see Table TABREF12): (A) speech-to-speech translation baseline, (B) the baseline with enhanced MT and prosodic alignment, (C) the former system enhanced with audio rendering.
Our evaluation focuses on two questions:
What is the overall naturalness of automatic dubbing?
How does each introduced enhancement contribute to the naturalness of automatic dubbing?
We adopt the MUSHRA (MUlti Stimulus test with Hidden Reference and Anchor) methodology BIBREF29, originally designed to evaluate audio codecs and later also TTS. We asked listeners to evaluate the naturalness of each version of a video clip on a 0-100 scale. Figure FIGREF15 shows the user interface. In the absence of a human-dubbed version of each clip, we decided to use, for calibration purposes, the clip in the original language as hidden reference. The clip versions to evaluate are not labeled and are randomly ordered. The observer has to play each version at least once before moving forward and can leave a comment about the worst version.
In order to limit randomness introduced by ASR and TTS across the clips and by MT across versions of the same clip, we decided to run the experiments using manual speech transcripts, one TTS voice per gender, and MT output by the baseline (A) and enhanced MT system (B-C) of quality judged at least acceptable by an expert. With these criteria in mind, we selected 24 video clips from 6 TED Talks (3 female and 3 male speakers, 5 clips per talk) from the official test set of the MUST-C corpus BIBREF30 with the following criteria: duration of around 10-15 seconds, only one speaker talking, at least two sentences, speaker face mostly visible.
We involved both Italian and non-Italian listeners in the experiment. We asked all participants to disregard the content and focus only on the naturalness of the output. Our goal is to measure both language-independent and language-dependent naturalness, i.e. to verify how closely the speech in the video resembles human speech with respect to acoustics and synchronization, and how intelligible it is to native listeners.
Experimental Evaluation ::: Results
We collected a total of 657 ratings by 14 volunteers, 5 Italian and 9 non-Italian listeners, spread over the 24 clips and three testing conditions. We conducted a statistical analysis of the data with linear mixed-effects models using the lme4 package for R BIBREF31. We analyzed the naturalness score (response variable) against the following two-level fixed effects: dubbing system A vs. B, system A vs. C, and system B vs. C. We ran separate analyses for Italian and non-Italian listeners. In our mixed models, listeners and video clips are random effects, as they represent a tiny sample of the respective true populations BIBREF31. We keep models maximal, i.e. with intercepts and slopes for each random effect, and remove terms as required to avoid singularities BIBREF32. Each model is fitted by maximum likelihood, and the significance of intercepts and slopes is computed via t-test.
Table TABREF18 summarizes our results. In the first comparison, baseline (A) versus the system with enhanced MT and prosody alignment (B), we see that both non-Italian and Italian listeners perceive a similar naturalness of system A (46.81 vs. 47.22). Moving to system B, non-Italian listeners perceive a small improvement (+1.14), although not statistically significant, while Italian listeners perceive a statistically significant degradation (-10.93). In the comparison between B and C (i.e. B enhanced with audio rendering), we see that non-Italian listeners observe a statistically significant increase in naturalness (+10.34), while Italian listeners perceive a smaller, not statistically significant improvement (+1.05). The final comparison between A and C gives results largely consistent with the previous two evaluations: non-Italian listeners perceive better quality in condition C (+11.01) while Italian listeners perceive lower quality (-9.60). Both measured variations are however not statistically significant, due to the higher standard errors of the slope estimates $\Delta $C. Notice in fact that each mixed-effects model is trained on distinct data sets and with different random effect variables. A closer look at the random effects parameters indeed shows that for the B vs. C comparison, the standard deviation estimate of the listener intercept is 3.70, while for the A vs. C one it is 11.02. In other words, much higher variability across user scores is observed in the A vs. C case than in the B vs. C case. A much smaller increase is instead observed across the video-clip random intercepts, i.e. from 11.80 to 12.66. The comments left by the Italian listeners indicate that the main problem of system B is the unnaturalness of the speaking rate, i.e. it is either too slow, too fast, or too uneven.
The distributions of the MUSHRA scores presented at the top of Figure FIGREF19 confirm our analysis. More relevantly, the distribution of the rank order (bottom) strengthens our previous analysis. Italian listeners tend to rank system A as the best system (median $1.0$) and vary their preference between systems B and C (both with median $2.0$). In contrast, non-Italian listeners rank system A as the worst system (median $2.5$), system B as the second (median $2.0$), and statistically significantly prefer system C as the best system (median $1.0$).
Hence, while our preliminary evaluation found that shorter MT output can potentially enable better synchronization, the combination of MT and prosodic alignment appears to be still problematic and prone to generate unnatural speech.
The incorporation of audio rendering (system $C$) significantly improves the experience of the non-Italian listeners (median 66) with respect to systems $A$ and $B$. This points out the relevance of including para-linguistic aspects (e.g. applause, audience laughter at jokes, etc.) and acoustic conditions (i.e. reverberation, ambient noise, etc.). For the target (Italian) listeners this improvement appears instead masked by the disfluencies introduced by the prosodic alignment step. If we try to directly measure the relative gains given by audio rendering, we see that Italian listeners score system B better than system A 27% of the time and system C better than A 31% of the time, which is a 15% relative gain. On the contrary, non-Italian listeners score B better than A 52% of the time, and C better than A 66% of the time, which is a 27% relative gain.
Conclusions
We have perceptually evaluated the impact on naturalness of automatic speech dubbing when we enhance a baseline speech-to-speech translation system with the ability to control the length of the translation output, align target words with the speech-pause segmentation of the source, and enrich the speech output with ambient noise and reverberation extracted from the original audio. We tested our system with both Italian and non-Italian listeners in order to evaluate both the language-independent and language-dependent naturalness of dubbed videos. Results show that while we succeeded in achieving synchronization at the phrasal level, our prosodic alignment step negatively impacts the fluency and prosody of the generated language. The impact of these disfluencies on native listeners seems to partially mask the effect of the audio rendering with background noise and reverberation, which instead results in a major increase in naturalness for non-Italian listeners. Future work will be devoted to improving the prosodic alignment component, by computing better segmentation and introducing more flexible lip-synchronization.
Acknowledgements
The authors would like to thank the Amazon Polly, Translate and Transcribe research teams; Adam Michalski, Alessandra Brusadin, Mattia Di Gangi and Surafel Melaku for contributions to the project, and all colleagues at Amazon AWS who helped with the evaluation. | No |
32e78ca99ba8b8423d4b21c54cd5309cb92191fc | 32e78ca99ba8b8423d4b21c54cd5309cb92191fc_0 | Q: How many people are employed for the subjective evaluation?
Text: Introduction
Automatic dubbing can be regarded as an extension of the speech-to-speech translation (STST) task BIBREF0, which is generally seen as the combination of three sub-tasks: (i) transcribing speech to text in a source language (ASR), (ii) translating text from a source to a target language (MT) and (iii) generating speech from text in a target language (TTS). Independently from the implementation approach BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, the main goal of STST is producing an output that reflects the linguistic content of the original sentence. On the other hand, automatic dubbing aims to replace all speech contained in a video document with speech in a different language, so that the result sounds and looks as natural as the original. Hence, in addition to conveying the same content of the original utterance, dubbing should also match the original timbre, emotion, duration, prosody, background noise, and reverberation.
While STST has been addressed for a long time and by several research labs BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF0, comparatively fewer and more scattered efforts have been devoted to automatic dubbing BIBREF7, BIBREF8, BIBREF9, BIBREF10, although the potential demand for such technology could be huge. In fact, multimedia content created and put online has been growing at an exponential rate in the last decade, while the availability and cost of human skills for subtitling and dubbing still remain a barrier to its diffusion worldwide. Professional dubbing of a video file is a very labor-intensive process that involves many steps: (i) extracting speech segments from the audio track and annotating these with speaker information; (ii) transcribing the speech segments; (iii) translating the transcript into the target language; (iv) adapting the translation for timing; (v) choosing the voice actors; (vi) performing the dubbing sessions; (vii) fine-aligning the dubbed speech segments; (viii) mixing the new voice tracks within the original soundtrack.
Automatic dubbing has been addressed in both monolingual and cross-lingual settings. In BIBREF14, synchronization of two speech signals with the same content was tackled with time-alignment via dynamic time warping. In BIBREF15 automatic monolingual dubbing for TV users with special needs was generated from subtitles. However, due to the poor correlation between length and timing of the subtitles, TTS output frequently broke the timing boundaries. To avoid unnatural time compression of the TTS voice when fitting timing constraints, BIBREF7 proposed phone-dependent time compression and text simplification to shorten the subtitles, while BIBREF8 leveraged scene-change detection to relax the subtitle time boundaries. Regarding cross-lingual dubbing, lip movement synchronization was tackled in BIBREF9 by directly modifying the actor's mouth motion via shuffling of the actor's video frames. While the method does not use any prior linguistic or phonetic knowledge, it has only been demonstrated under very simple and controlled conditions. Finally, most related to our contribution is BIBREF10, which discusses speech synchronization at the phrase level (prosodic alignment) for English-to-Spanish automatic dubbing.
In this paper we present research work to enhance a STST pipeline in order to comply with the timing and rendering requirements posed by cross-lingual automatic dubbing of TED Talk videos. Similarly to BIBREF7, we also shorten the TTS script by directly modifying the MT engine rather than via text simplification. As in BIBREF10, we synchronize phrases across languages, but follow a fluency-based rather than content-based criterion and replace generation and rescoring of hypotheses in BIBREF10 with a more efficient dynamic programming solution. Moreover, we extend BIBREF10 by enhancing neural MT and neural TTS to improve speech synchronization, and by performing audio rendering on the dubbed speech to make it sound more real inside the video.
In the following sections, we introduce the overall architecture (Section 2) and the proposed enhancements (Sections 3-6). Then, we present results (Section 7) of experiments evaluating the naturalness of automatic dubbing of TED Talk clips from English into Italian. To our knowledge, this is the first work on automatic dubbing that integrates enhanced deep learning models for MT, TTS and audio rendering, and evaluates them on real-world videos.
Automatic Dubbing
With some approximation, we consider here automatic dubbing of the audio track of a video as the task of STST, i.e. ASR + MT + TTS, with the additional requirement that the output must be temporally, prosodically and acoustically close to the original audio. We investigate an architecture (see Figure 1) that enhances the STST pipeline with (i) enhanced MT able to generate translations of variable lengths, (ii) a prosodic alignment module that temporally aligns the MT output with the speech segments in the original audio, (iii) enhanced TTS to accurately control the duration of each produced utterance, and, finally, (iv) audio rendering that adds to the TTS output background noise and reverberation extracted from the original audio. In the following, we describe each component in detail, with the exception of ASR, for which we use an off-the-shelf online service BIBREF16.
Machine Translation
Our approach to control the length of MT output is inspired by target forcing in multilingual neural MT BIBREF17, BIBREF18. We partition the training sentence pairs into three groups (short, normal, long) according to the target/source string-length ratio. In practice, we select two thresholds $t_1$ and $t_2$, and partition training data according to the length-ratio intervals $[0,t_1)$, $[t_1,t_2)$ and $[t_2,\infty ]$. At training time a length token is prepended to each source sentence according to its group, in order to let the neural MT model discriminate between the groups. At inference time, the length token is instead prepended to bias the model to generate a translation of the desired length type. We trained a Transformer model BIBREF19 with output length control on web crawled and proprietary data amounting to 150 million English-Italian sentence pairs (with no overlap with the test data). The model has encoder and decoder with 6 layers, layer size of 1024, hidden size of 4096 on feed forward layers, and 16 heads in the multi-head attention. For the reported experiments, we trained the models with thresholds $t_1=0.95$ and $t_2=1.05$ and generated at inference time translations of the shortest type, resulting in an average length ratio of $0.97$ on our test set. A detailed account of the approach, the followed training procedure and experimental results on the same task of this paper can be found in BIBREF20. Finally, as baseline MT system we used an online service.
Prosodic Alignment
Prosodic alignment BIBREF10 is the problem of segmenting the target sentence to optimally match the distribution of words and pauses of the source sentence. Let ${\bf e}=e_1,e_2,\ldots ,e_n$ be a source sentence of $n$ words which is segmented according to $k$ breakpoints $1 \le i_1 < i_2 < \ldots i_k=n$, shortly denoted with ${\bf i}$. Given a target sentence ${\bf f}=f_1,f_2,\ldots ,f_m$ of $m$ words, the goal is to find within it $k$ corresponding breakpoints $1 \le j_1 < j_2 < \ldots j_k=m$ (shortly denoted with ${\bf j}$) that maximize the probability:
By assuming a Markovian dependency on ${\bf j}$, i.e.:
and omitting from the notation the constant terms ${\bf i},{\bf e},{\bf f}$, we can derive the following recurrent quantity:
where $Q(j,t)$ denotes the log-probability of the optimal segmentation of ${\bf f}$ up to position $j$ with $t$ break points. It is easy to show that the solution of (DISPLAY_FORM5) corresponds to $Q(m,k)$ and that optimal segmentation can be efficiently computed via dynamic-programming. Let ${\tilde{f}}_t = f_{j_{t-1}+1},\ldots ,f_{j_t}$ and ${\tilde{e}}_t =e_{i_{t-1}+1},\ldots ,e_{i_t}$ indicate the $t$-th segments of ${\bf f}$ and ${\bf e}$, respectively, we define the conditional probability of the $t$-th break point in ${\bf f}$ by:
The first term computes the relative match in duration between the corresponding $t$-th segments, while the second term measures the linguistic plausibility of placing a break after position ${j_t}$ in ${\bf f}$. For this, we simply compute the following ratio of language model perplexities over a text window centered on the break point, assuming or not the presence of a pause (comma, semicolon or dash) in the middle:
In our implementation, we use a larger text window (the last and first two words), replace words with parts of speech, and estimate the language model on a large English corpus.
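To make the break-plausibility term concrete, the toy sketch below scores the POS window around a candidate break with and without a pause symbol and returns the perplexity ratio. The tiny add-one-smoothed bigram model, the tag inventory, the use of a generic PUNCT symbol for the pause, and the orientation of the ratio are all illustrative assumptions standing in for the large-corpus POS language model and the exact formula used in the system.

```python
import math
from collections import Counter

corpus = [["DET", "NOUN", "VERB", "PUNCT", "DET", "ADJ", "NOUN"],
          ["PRON", "VERB", "DET", "NOUN", "PUNCT", "CONJ", "PRON", "VERB", "ADV"]]
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
unigrams = Counter(t for s in corpus for t in s)
V = len(unigrams)

def perplexity(seq):
    logp = sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))    # add-one smoothing
               for a, b in zip(seq, seq[1:]))
    return math.exp(-logp / max(1, len(seq) - 1))

def break_plausibility(left_tags, right_tags):
    window = left_tags[-2:] + right_tags[:2]                  # last two + first two tokens
    with_pause = left_tags[-2:] + ["PUNCT"] + right_tags[:2]  # same window with a pause inserted
    return perplexity(window) / perplexity(with_pause)        # >1 favours placing a break here

print(break_plausibility(["DET", "NOUN", "VERB"], ["DET", "ADJ", "NOUN"]))
```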
Text To Speech
Our neural TTS system consists of two modules: a Context Generation module, which generates a context sequence from the input text, and a Neural Vocoder module, which converts the context sequence into a speech waveform. The first one is an attention-based sequence-to-sequence network BIBREF21, BIBREF22 that predicts a Mel-spectrogram given an input text. A grapheme-to-phoneme module converts the sequence of words into a sequence of phonemes plus augmented features like punctuation marks and prosody related features derived from the text (e.g. lexical stress). For the Context Generation module, we trained speaker-dependent models on two Italian voices, male and female, with 10 and 37 hours of high quality recordings, respectively. We use the Universal Neural Vocoder introduced in BIBREF23, pre-trained with 2000 utterances per each of the 74 voices from a proprietary database.
To ensure close matching of the duration of Italian TTS output with timing information extracted from the original English audio, for each utterance we resize the generated Mel spectrogram using spline interpolation prior to running the Neural Vocoder. We empirically observed that this method produces speech of better quality than traditional time-stretching.
Audio Rendering ::: Foreground-Background Separation
The input audio can be seen as a mixture of foreground (speech) and background (everything else) and our goal is to extract the background and add it to the dubbed speech to make it sound more real and similar to the original. For the foreground-background separation task, we adapted the popular U-Net BIBREF24 architecture, which is described in detail in BIBREF25 for a music-vocal separation task. It consists of a series of down-sampling blocks, followed by one ’bottom’ convolutional layer, followed by a series of up-sampling blocks with skip connections from the down-sampling to the up-sampling blocks. Because of the down-sampling blocks, the model can compute a number of high-level features on coarser time scales, which are concatenated with the local, high-resolution features computed from the same-level up-sampling block. This concatenation results in multi-scale features for prediction. The model operates on a time-frequency representation (spectrograms) of the audio mixture and it outputs two soft ratio masks corresponding to foreground and background, respectively, which are multiplied element-wise with the mixed spectrogram, to obtain the final estimates of the two sources. Finally, the estimated spectrograms go through an inverse short-term Fourier transform block to produce raw time domain signals. The loss function used to train the model is the sum of the $L_1$ losses between the target and the masked input spectrograms, for the foreground and the background BIBREF25, respectively. The model is trained with the Adam optimizer on mixed audio provided with foreground and background ground truths. Training data was created from 360 hours of clean speech from Librispeech (foreground) and 120 hours of recordings taken from AudioSet BIBREF26 (background), from which speech was filtered out using a Voice Activity Detector (VAD). Foreground and background are mixed at different signal-to-noise ratios (SNR) to generate the audio mixtures.
Audio Rendering ::: Re-reverberation
In this step, we estimate the environment reverberation from the original audio and apply it to the dubbed audio. Unfortunately, estimating the room impulse response (RIR) from a reverberated signal requires solving an ill-posed blind deconvolution problem. Hence, instead of estimating the RIR, we do a blind estimation of the reverberation time (RT), which is commonly used to assess the amount of room reverberation or its effects. The RT is defined as the time interval in which the energy of a steady-state sound field decays 60 dB below its initial level after switching off the excitation source. In this work we use a Maximum Likelihood Estimation (MLE) based RT estimate (see details of the method in BIBREF27). Estimated RT is then used to generate a synthetic RIR using a publicly available RIR generator BIBREF28. This synthetic RIR is finally applied to the dubbed audio.
Experimental Evaluation
We evaluated our automatic dubbing architecture (Figure 1), by running perceptual evaluations in which users are asked to grade the naturalness of video clips dubbed with three configurations (see Table TABREF12): (A) speech-to-speech translation baseline, (B) the baseline with enhanced MT and prosodic alignment, (C) the former system enhanced with audio rendering.
Our evaluation focuses on two questions:
What is the overall naturalness of automatic dubbing?
How does each introduced enhancement contribute to the naturalness of automatic dubbing?
We adopt the MUSHRA (MUlti Stimulus test with Hidden Reference and Anchor) methodology BIBREF29, originally designed to evaluate audio codecs and later also TTS. We asked listeners to evaluate the naturalness of each version of a video clip on a 0-100 scale. Figure FIGREF15 shows the user interface. In the absence of a human-dubbed version of each clip, we decided to use, for calibration purposes, the clip in the original language as the hidden reference. The clip versions to evaluate are not labeled and are randomly ordered. The observer has to play each version at least once before moving forward and can leave a comment about the worst version.
In order to limit randomness introduced by ASR and TTS across the clips and by MT across versions of the same clip, we decided to run the experiments using manual speech transcripts, one TTS voice per gender, and MT output by the baseline (A) and enhanced MT system (B-C) of quality judged at least acceptable by an expert. With these criteria in mind, we selected 24 video clips from 6 TED Talks (3 female and 3 male speakers, 5 clips per talk) from the official test set of the MUST-C corpus BIBREF30 with the following criteria: duration of around 10-15 seconds, only one speaker talking, at least two sentences, speaker face mostly visible.
We involved both Italian and non-Italian listeners in the experiment. We asked all participants to disregard the content and focus only on the naturalness of the output. Our goal is to measure both language-independent and language-dependent naturalness, i.e. to verify how closely the speech in the video resembles human speech with respect to acoustics and synchronization, and how intelligible it is to native listeners.
Experimental Evaluation ::: Results
We collected a total of 657 ratings by 14 volunteers, 5 Italian and 9 non-Italian listeners, spread over the 24 clips and three testing conditions. We conducted a statistical analysis of the data with linear mixed-effects models using the lme4 package for R BIBREF31. We analyzed the naturalness score (response variable) against the following two-level fixed effects: dubbing system A vs. B, system A vs. C, and system B vs. C. We ran separate analyses for Italian and non-Italian listeners. In our mixed models, listeners and video clips are random effects, as they represent a tiny sample of the respective true populations BIBREF31. We keep the models maximal, i.e. with intercepts and slopes for each random effect, and remove terms only as required to avoid singularities BIBREF32. Each model is fitted by maximum likelihood, and the significance of intercepts and slopes is computed via t-tests.
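For readability, the maximal model implied by this description can be sketched, for one pairwise comparison, as follows (a reconstruction, not the exact lme4 call): $y_{jk}$ is the score given by listener $j$ to clip $k$, $x_{jk}\in\{0,1\}$ flags the second system of the pair, $\beta_{\Delta}$ is the reported slope, and $u$, $v$ are per-listener and per-clip random intercepts and slopes.

```latex
y_{jk} = \beta_0 + \beta_{\Delta}\, x_{jk}
       + u_{0j} + u_{1j}\, x_{jk}
       + v_{0k} + v_{1k}\, x_{jk}
       + \varepsilon_{jk}
```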
Table TABREF18 summarizes our results. In the first comparison, baseline (A) versus the system with enhanced MT and prosodic alignment (B), we see that both non-Italian and Italian listeners perceive a similar naturalness for system A (46.81 vs. 47.22). When moving to system B, non-Italian listeners perceive a small improvement (+1.14), although not statistically significant, while Italian listeners perceive a statistically significant degradation (-10.93). In the comparison between B and C (i.e. B enhanced with audio rendering), non-Italian listeners observe an increase in naturalness (+10.34) that is statistically significant, while Italian listeners perceive a smaller and not statistically significant improvement (+1.05). The final comparison between A and C gives results almost consistent with the previous two evaluations: non-Italian listeners perceive better quality in condition C (+11.01) while Italian listeners perceive lower quality (-9.60). Both measured variations are, however, not statistically significant due to the higher standard errors of the slope estimates $\Delta $C. Notice in fact that each mixed-effects model is trained on a distinct data set and with different random-effect variables. A closer look at the random-effects parameters indeed shows that for the B vs. C comparison, the standard deviation estimate of the listener intercept is 3.70, while for the A vs. C one it is 11.02. In other words, much higher variability across user scores is observed in the A vs. C case than in the B vs. C case. A much smaller increase is instead observed across the video-clip random intercepts, i.e. from 11.80 to 12.66. The comments left by the Italian listeners indicate that the main problem of system B is the unnaturalness of the speaking rate, i.e. it is either too slow, too fast, or too uneven.
The distributions of the MUSHRA scores presented at the top of Figure FIGREF19 confirm our analysis. More relevantly, the distribution of the rank order (bottom) strengthens our previous analysis. Italian listeners tend to rank system A as the best system (median $1.0$) and vary their preference between systems B and C (both with median $2.0$). In contrast, non-Italian listeners rank system A as the worst system (median $2.5$), system B as the second (median $2.0$), and show a statistically significant preference for system C as the best system (median $1.0$).
Hence, while our preliminary evaluation found that shorter MT output can potentially enable better synchronization, the combination of MT and prosodic alignment appears to still be problematic and prone to generating unnatural speech.
The incorporation of audio rendering (system $C$) significantly improves the experience of the non-Italian listeners (median of 66) with respect to systems $A$ and $B$. This points out the relevance of including para-linguistic aspects (e.g. applause, audience laughter at jokes, etc.) and acoustic conditions (e.g. reverberation, ambient noise, etc.). For the target (Italian) listeners this improvement appears instead to be masked by the disfluencies introduced by the prosodic alignment step. If we try to directly measure the relative gain given by audio rendering, we see that Italian listeners score system B better than system A 27% of the time and system C better than A 31% of the time, which is a 15% relative gain. On the contrary, non-Italian listeners score B better than A 52% of the time, and C better than A 66% of the time, which is a 27% relative gain.
Conclusions
We have perceptually evaluated the impact on naturalness of automatic speech dubbing when we enhance a baseline speech-to-speech translation system with the possibility to control the length of the translation output, to align target words with the speech-pause segmentation of the source, and to enrich the speech output with ambient noise and reverberation extracted from the original audio. We tested our system with both Italian and non-Italian listeners in order to evaluate both the language-independent and language-dependent naturalness of dubbed videos. Results show that while we succeeded at achieving synchronization at the phrasal level, our prosodic alignment step negatively impacts the fluency and prosody of the generated language. The impact of these disfluencies on native listeners seems to partially mask the effect of the audio rendering with background noise and reverberation, which instead results in a major increase of naturalness for non-Italian listeners. Future work will definitely be devoted to improving the prosodic alignment component, by computing better segmentations and introducing more flexible lip synchronization.
Acknowledgements
The authors would like to thank the Amazon Polly, Translate and Transcribe research teams; Adam Michalski, Alessandra Brusadin, Mattia Di Gangi and Surafel Melaku for contributions to the project, and all colleagues at Amazon AWS who helped with the evaluation. | 14 volunteers |
ffc5ad48b69a71e92295a66a9a0ff39548ab3cf1 | ffc5ad48b69a71e92295a66a9a0ff39548ab3cf1_0 | Q: What other embedding models are tested?
Text: Introduction
The prominent model for representing semantics of words is the distributional vector space model BIBREF2 and the prevalent approach for constructing these models is the distributional one which assumes that semantics of a word can be predicted from its context, hence placing words with similar contexts in close proximity to each other in an imaginary high-dimensional vector space. Distributional techniques, either in their conventional form which compute co-occurrence matrices BIBREF2 , BIBREF3 and learn high-dimensional vectors for words, or the recent neural-based paradigm which directly learns latent low-dimensional vectors, usually referred to as embeddings BIBREF4 , rely on a multitude of occurrences for each individual word to enable accurate representations. As a result of this statistical nature, words that are infrequent or unseen during training, such as domain-specific words, will not have reliable embeddings. This is the case even if massive corpora are used for training, such as the 100B-word Google News dataset BIBREF5 .
Recent work on embedding induction has mainly focused on morphologically complex rare words and has tried to address the problem by learning transformations that can transfer a word's semantic information to its morphological variations, hence inducing embeddings for complex forms by breaking them into their sub-word units BIBREF6 , BIBREF7 , BIBREF8 . However, these techniques are unable to effectively model single-morpheme words for which no sub-word information is available in the training data, essentially ignoring most of the rare domain-specific entities which are crucial in the performance of NLP systems when applied to those domains.
On the other hand, distributional techniques generally ignore all the lexical knowledge that is encoded in dictionaries, ontologies, or other lexical resources. There exist hundreds of high coverage or domain-specific lexical resources which contain valuable information for infrequent words, particularly in domains such as health. Here, we present a methodology that merges the two worlds by benefiting from both expert-based lexical knowledge encoded in external resources as well as statistical information derived from large corpora, enabling vocabulary expansion not only for morphological variations but also for infrequent single-morpheme words. The contributions of this work are twofold: (1) we propose a technique that induces embeddings for rare and unseen words by exploiting the information encoded for them in an external lexical resource, and (2) we apply, possibly for the first time, vector space mapping techniques, which are widely used in multilingual settings, to map two lexical semantic spaces with different properties in the same language. We show that a transfer methodology can lead to consistent improvements on a standard rare word similarity dataset.
Methodology
We take an existing semantic space INLINEFORM0 and enrich it with rare and unseen words on the basis of the knowledge encoded for them in an external knowledge base (KB) INLINEFORM1 . The procedure has two main steps: we first embed INLINEFORM2 to transform it from a graph representation into a vector space representation (§ SECREF2 ), and then map this space to INLINEFORM3 (§ SECREF7 ). Our methodology is illustrated in Figure 1.
In our experiments, we used WordNet 3.0 BIBREF9 as our external knowledge base INLINEFORM0 . For word embeddings, we experimented with two popular models: (1) GloVe embeddings trained by BIBREF10 on Wikipedia and Gigaword 5 (vocab: 400K, dim: 300), and (2) w2v-gn, Word2vec BIBREF5 trained on the Google News dataset (vocab: 3M, dim: 300).
Knowledge Base Embedding
Our coverage enhancement starts by transforming the knowledge base INLINEFORM0 into a vector space representation that is comparable to that of the corpus-based space INLINEFORM1 . To this end, we use two techniques for learning low-dimensional feature spaces from knowledge graphs: DeepWalk and node2vec. DeepWalk uses a stream of short random walks in order to extract local information for a node from the graph. By treating these walks as short sentences and phrases in a special language, the approach learns latent representations for each node. Similarly, node2vec learns a mapping of nodes to continuous vectors that maximizes the likelihood of preserving network neighborhoods of nodes. Thanks to a flexible objective that is not tied to a particular sampling strategy, node2vec reports improvements over DeepWalk on multiple classification and link prediction datasets. For both these systems we used the default parameters and set the dimensionality of output representation to 100. Also, note that nodes in the semantic graph of WordNet represent synsets. Hence, a polysemous word would correspond to multiple nodes. In our experiments, we use the MaxSim assumption of BIBREF11 in order to map words to synsets.
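As a sketch of this word-to-synset step (illustrative; the exact MaxSim formulation follows BIBREF11), the similarity of two words can be taken as the maximum cosine similarity over their synset-vector pairs:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def maxsim(word1, word2, word_to_synsets, synset_vectors):
    """Word-level similarity from synset-level (node) embeddings:
    take the best-matching pair of senses of the two words."""
    return max(
        cosine(synset_vectors[s1], synset_vectors[s2])
        for s1 in word_to_synsets[word1]
        for s2 in word_to_synsets[word2]
    )
```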
To verify the reliability of these vector representations, we carried out an experiment on three standard word similarity datasets: RG-65 BIBREF12 , WordSim-353 similarity subset BIBREF13 , and SimLex-999 BIBREF14 . Table TABREF5 reports Pearson and Spearman correlations for the two KB embedding techniques (on WordNet's graph) and, as baseline, for our two word embeddings, i.e. w2v-gn and GloVe. The results are very similar, with node2vec proving to be slightly superior. We note that the performances are close to those of state-of-the-art WordNet approaches BIBREF15 , which shows the efficacy of these embedding techniques in capturing the semantic properties of WordNet's graph.
Semantic Space Transformation
Once we have the lexical resource INLINEFORM0 represented as a vector space INLINEFORM1 , we proceed with projecting it to INLINEFORM2 in order to improve the word coverage of the latter with additional words from the former. In this procedure we make two assumptions. Firstly, the two spaces provide reliable models of word semantics; hence, the relative within-space distances between words in the two spaces are comparable. Secondly, there exists a set of shared words between the two spaces, which we refer to as semantic bridges, from which we can learn a projection that maps one space into another.
As for the mapping, we used two techniques which are widely used for the mapping of semantic spaces belonging to different languages, mainly with the purpose of learning multilingual semantic spaces: Least squares BIBREF16 , BIBREF17 and Canonical Correlation Analysis BIBREF18 , BIBREF19 . These models receive as their input two vector spaces of two different languages and a seed lexicon for that language pair and learn a linear mapping between the two spaces. Ideally, words that are semantically similar across the two languages will be placed in close proximity to each other in the projected space. We adapt these models to the monolingual setting and for mapping two semantic spaces with different properties. As for the seed lexicon (to which in our setting we refer to as semantic bridges) in this monolingual setting, we use the set of monosemous words in the vocabulary which are deemed to have the most reliable semantic representations.
Specifically, let INLINEFORM0 and INLINEFORM1 be the corpus and KB semantic spaces, respectively, and INLINEFORM2 and INLINEFORM3 be their corresponding subset of semantic bridges, i.e., words that are monosemous according to the WordNet sense inventory. Note that INLINEFORM4 and INLINEFORM5 are vector matrices that contain representations for the same set of corresponding words, i.e., INLINEFORM6 . LS views the problem as a multivariate regression and learns a linear function INLINEFORM7 (where INLINEFORM8 and INLINEFORM9 are the dimensionalities of the KB and corpus spaces, respectively) on the basis of the following INLINEFORM10 -regularized least squares error objective and typically using stochastic gradient descent: DISPLAYFORM0
The enriched space INLINEFORM0 is then obtained as a union of INLINEFORM1 and INLINEFORM2 . CCA, on the other hand, learns two distinct linear mappings INLINEFORM3 and INLINEFORM4 with the objective of maximizing the correlation between the dimensions of the projected vectors INLINEFORM5 and INLINEFORM6 : DISPLAYFORM0
In this case, INLINEFORM0 is the union of INLINEFORM1 and INLINEFORM2 . In the next section we first compare different KB embedding and transformation techniques introduced in this section and then apply our methodology to a rare word similarity task.
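A minimal sketch of the least-squares variant on the semantic bridges (the closed-form ridge solution is shown for brevity; the text above describes training with stochastic gradient descent):

```python
import numpy as np

def fit_ls_map(X_kb, X_corpus, lam=1.0):
    """Learn W that maps KB-space bridge rows onto their corpus-space rows:
    min_W ||X_kb W - X_corpus||^2 + lam ||W||^2."""
    d = X_kb.shape[1]
    return np.linalg.solve(X_kb.T @ X_kb + lam * np.eye(d), X_kb.T @ X_corpus)

def project_unseen(X_kb_rare, W):
    """Map KB vectors of rare/unseen words into the corpus space; the enriched
    space is the union of these projections with the existing corpus vectors."""
    return X_kb_rare @ W
```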
Evaluation benchmark
To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. A high performance on this benchmark shows that the mapping has been successful in placing semantically similar terms near to each other whereas dissimilar terms are relatively far apart in the space. We repeat the computation for each pair in the reverse direction.
Comparison Study
Figure FIGREF1 shows the performance of different configurations on our three similarity datasets and for increasing sizes of semantic bridge sets. We experimented with four different configurations: two KB embedding approaches, i.e. DeepWalk and node2vec, and two mapping techniques, i.e. LS and CCA (cf. § SECREF2 ). In general, the optimal performance is reached when around 3K semantic bridges are used for transformation. DeepWalk and node2vec prove to be very similar in their performance across the three datasets. Among the two transformation techniques, CCA consistently outperforms LS on all three datasets when provided with 1000 or more semantic bridges (with 500, however, LS always has an edge). In the remaining experiments we only report results for the best configuration: node2vec with CCA. We also set the size of semantic bridge set to 5K.
Rare Word Similarity
In order to verify the reliability of our technique in coverage expansion for infrequent words we did a set of experiments on the Rare Word similarity dataset BIBREF6 . The dataset comprises 2034 pairs of rare words, such as ulcerate-change and nurturance-care, judged by 10 raters on a [0,10] scale. Table TABREF15 shows the results on the dataset for three pre-trained word embeddings (cf. § SECREF2 ), in their initial form as well as when enriched with additional words from WordNet.
Among the three initial embeddings, w2v-gn-500K provides the lowest coverage, with over 20% out-of-vocabulary pairs, whereas GloVe has a similar coverage to that of w2v-gn despite its significantly smaller vocabulary (400K vs. 3M). Upon enrichment, all the embeddings attain near full coverage (over 99%), thanks to the vocabulary expansion by rare words in WordNet. The enhanced coverage leads to consistent performance improvements according to both Pearson and Spearman correlations. The best performance gain is achieved for w2v-gn-500K (around 10% absolute gain) which proves the efficacy of our approach in inducing embeddings for rare words. The improvements are also statistically significant (p < 0.05) according to conducted one tailed t-test BIBREF20 , showing that the coverage enhancement could lead to improved performance even if lower-performing KB embedding and transformation are used.
Related Work
The main focus of research in embedding coverage enhancement has been on the morphologically complex forms BIBREF21 . BIBREF6 used recursive neural networks (RNNs) and neural language models in order to induce embeddings for morphologically complex words from their morphemes whereas BIBREF22 adapted phrase composition models for this purpose. BIBREF7 proposed a different model based on log-bilinear language models, mainly to have a compositional vector-based morphological representation that can be easily integrated into a machine translation decoder. These models often utilize a morphological segmentation toolkit, such as Morfessor BIBREF23 , in order to break inflected words into their morphological structures and to obtain segmentations for words in the vocabulary. BIBREF8 put forward a technique that does not rely on any external morphological analyzer and instead, induces morphological rules and transformations, represented as vectors in the same embedding space. Based on these rules a morphological graph is constructed and representations are inferred by analyzing morphological transformations in the graph. However, all these techniques fall short in inducing representations for single-morpheme words that are not seen frequently during training as they base their modeling on information available on sub-word units. In contrast, our transformation-based model can also induce embeddings for single-morpheme words that are infrequent or unseen in the training data, such as domain-specific entities.
Conclusions and Future Work
We presented a methodology for merging distributional semantic spaces and lexical ontologies and applied it to the task of extending the vocabulary of the former with the help of information extracted from the latter. We carried out an analysis for different KB embedding and semantic space mapping techniques and also showed that our methodology can lead to considerable enrichment of two standard word embedding models, leading to consistent improvements on the rare word similarity dataset. One interesting property of our approach is that it can be used in the reverse direction and for the completion of knowledge bases using the distributional information derived from text corpora. In future work, we plan to investigate this direction. We also intend to experiment with domain-specific lexical resources and measure the impact of coverage enhancement on a downstream NLP application. | GloVe embeddings trained by BIBREF10 on Wikipedia and Gigaword 5 (vocab: 400K, dim: 300), w2v-gn, Word2vec BIBREF5 trained on the Google News dataset (vocab: 3M, dim: 300), DeepWalk , node2vec |
1024f22110c436aa7a62a1022819bfe62dc0d336 | 1024f22110c436aa7a62a1022819bfe62dc0d336_0 | Q: How is performance measured?
Text: Introduction
The prominent model for representing semantics of words is the distributional vector space model BIBREF2 and the prevalent approach for constructing these models is the distributional one which assumes that semantics of a word can be predicted from its context, hence placing words with similar contexts in close proximity to each other in an imaginary high-dimensional vector space. Distributional techniques, either in their conventional form which compute co-occurrence matrices BIBREF2 , BIBREF3 and learn high-dimensional vectors for words, or the recent neural-based paradigm which directly learns latent low-dimensional vectors, usually referred to as embeddings BIBREF4 , rely on a multitude of occurrences for each individual word to enable accurate representations. As a result of this statistical nature, words that are infrequent or unseen during training, such as domain-specific words, will not have reliable embeddings. This is the case even if massive corpora are used for training, such as the 100B-word Google News dataset BIBREF5 .
Recent work on embedding induction has mainly focused on morphologically complex rare words and has tried to address the problem by learning transformations that can transfer a word's semantic information to its morphological variations, hence inducing embeddings for complex forms by breaking them into their sub-word units BIBREF6 , BIBREF7 , BIBREF8 . However, these techniques are unable to effectively model single-morpheme words for which no sub-word information is available in the training data, essentially ignoring most of the rare domain-specific entities which are crucial in the performance of NLP systems when applied to those domains.
On the other hand, distributional techniques generally ignore all the lexical knowledge that is encoded in dictionaries, ontologies, or other lexical resources. There exist hundreds of high coverage or domain-specific lexical resources which contain valuable information for infrequent words, particularly in domains such as health. Here, we present a methodology that merges the two worlds by benefiting from both expert-based lexical knowledge encoded in external resources as well as statistical information derived from large corpora, enabling vocabulary expansion not only for morphological variations but also for infrequent single-morpheme words. The contributions of this work are twofold: (1) we propose a technique that induces embeddings for rare and unseen words by exploiting the information encoded for them in an external lexical resource, and (2) we apply, possibly for the first time, vector space mapping techniques, which are widely used in multilingual settings, to map two lexical semantic spaces with different properties in the same language. We show that a transfer methodology can lead to consistent improvements on a standard rare word similarity dataset.
Methodology
We take an existing semantic space INLINEFORM0 and enrich it with rare and unseen words on the basis of the knowledge encoded for them in an external knowledge base (KB) INLINEFORM1 . The procedure has two main steps: we first embed INLINEFORM2 to transform it from a graph representation into a vector space representation (§ SECREF2 ), and then map this space to INLINEFORM3 (§ SECREF7 ). Our methodology is illustrated in Figure 1.
In our experiments, we used WordNet 3.0 BIBREF9 as our external knowledge base INLINEFORM0 . For word embeddings, we experimented with two popular models: (1) GloVe embeddings trained by BIBREF10 on Wikipedia and Gigaword 5 (vocab: 400K, dim: 300), and (2) w2v-gn, Word2vec BIBREF5 trained on the Google News dataset (vocab: 3M, dim: 300).
Knowledge Base Embedding
Our coverage enhancement starts by transforming the knowledge base INLINEFORM0 into a vector space representation that is comparable to that of the corpus-based space INLINEFORM1 . To this end, we use two techniques for learning low-dimensional feature spaces from knowledge graphs: DeepWalk and node2vec. DeepWalk uses a stream of short random walks in order to extract local information for a node from the graph. By treating these walks as short sentences and phrases in a special language, the approach learns latent representations for each node. Similarly, node2vec learns a mapping of nodes to continuous vectors that maximizes the likelihood of preserving network neighborhoods of nodes. Thanks to a flexible objective that is not tied to a particular sampling strategy, node2vec reports improvements over DeepWalk on multiple classification and link prediction datasets. For both these systems we used the default parameters and set the dimensionality of output representation to 100. Also, note that nodes in the semantic graph of WordNet represent synsets. Hence, a polysemous word would correspond to multiple nodes. In our experiments, we use the MaxSim assumption of BIBREF11 in order to map words to synsets.
To verify the reliability of these vector representations, we carried out an experiment on three standard word similarity datasets: RG-65 BIBREF12 , WordSim-353 similarity subset BIBREF13 , and SimLex-999 BIBREF14 . Table TABREF5 reports Pearson and Spearman correlations for the two KB embedding techniques (on WordNet's graph) and, as baseline, for our two word embeddings, i.e. w2v-gn and GloVe. The results are very similar, with node2vec proving to be slightly superior. We note that the performances are close to those of state-of-the-art WordNet approaches BIBREF15 , which shows the efficacy of these embedding techniques in capturing the semantic properties of WordNet's graph.
Semantic Space Transformation
Once we have the lexical resource INLINEFORM0 represented as a vector space INLINEFORM1 , we proceed with projecting it to INLINEFORM2 in order to improve the word coverage of the latter with additional words from the former. In this procedure we make two assumptions. Firstly, the two spaces provide reliable models of word semantics; hence, the relative within-space distances between words in the two spaces are comparable. Secondly, there exists a set of shared words between the two spaces, which we refer to as semantic bridges, from which we can learn a projection that maps one space into another.
As for the mapping, we used two techniques which are widely used for the mapping of semantic spaces belonging to different languages, mainly with the purpose of learning multilingual semantic spaces: Least squares BIBREF16 , BIBREF17 and Canonical Correlation Analysis BIBREF18 , BIBREF19 . These models receive as their input two vector spaces of two different languages and a seed lexicon for that language pair and learn a linear mapping between the two spaces. Ideally, words that are semantically similar across the two languages will be placed in close proximity to each other in the projected space. We adapt these models to the monolingual setting and for mapping two semantic spaces with different properties. As for the seed lexicon (to which in our setting we refer to as semantic bridges) in this monolingual setting, we use the set of monosemous words in the vocabulary which are deemed to have the most reliable semantic representations.
Specifically, let INLINEFORM0 and INLINEFORM1 be the corpus and KB semantic spaces, respectively, and INLINEFORM2 and INLINEFORM3 be their corresponding subset of semantic bridges, i.e., words that are monosemous according to the WordNet sense inventory. Note that INLINEFORM4 and INLINEFORM5 are vector matrices that contain representations for the same set of corresponding words, i.e., INLINEFORM6 . LS views the problem as a multivariate regression and learns a linear function INLINEFORM7 (where INLINEFORM8 and INLINEFORM9 are the dimensionalities of the KB and corpus spaces, respectively) on the basis of the following INLINEFORM10 -regularized least squares error objective and typically using stochastic gradient descent: DISPLAYFORM0
The enriched space INLINEFORM0 is then obtained as a union of INLINEFORM1 and INLINEFORM2 . CCA, on the other hand, learns two distinct linear mappings INLINEFORM3 and INLINEFORM4 with the objective of maximizing the correlation between the dimensions of the projected vectors INLINEFORM5 and INLINEFORM6 : DISPLAYFORM0
In this case, INLINEFORM0 is the union of INLINEFORM1 and INLINEFORM2 . In the next section we first compare different KB embedding and transformation techniques introduced in this section and then apply our methodology to a rare word similarity task.
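A minimal sketch of the CCA variant using scikit-learn (an assumption about tooling; the original implementation is not specified): fit the two linear maps on the aligned semantic bridges, then transform KB-only words and corpus words into the shared space.

```python
from sklearn.cross_decomposition import CCA

def fit_cca(bridges_kb, bridges_corpus, n_components=100):
    """Learn the two CCA projections on bridge words (rows are aligned)."""
    cca = CCA(n_components=n_components, max_iter=1000)
    cca.fit(bridges_kb, bridges_corpus)
    return cca

# usage sketch:
# cca = fit_cca(bridges_kb, bridges_corpus)
# kb_side, corpus_side = cca.transform(rare_kb_vectors, corpus_vectors)
# The enriched space is the union of the two projected sets.
```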
Evaluation benchmark
To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. A high performance on this benchmark shows that the mapping has been successful in placing semantically similar terms near to each other whereas dissimilar terms are relatively far apart in the space. We repeat the computation for each pair in the reverse direction.
Comparison Study
Figure FIGREF1 shows the performance of different configurations on our three similarity datasets and for increasing sizes of semantic bridge sets. We experimented with four different configurations: two KB embedding approaches, i.e. DeepWalk and node2vec, and two mapping techniques, i.e. LS and CCA (cf. § SECREF2 ). In general, the optimal performance is reached when around 3K semantic bridges are used for transformation. DeepWalk and node2vec prove to be very similar in their performance across the three datasets. Among the two transformation techniques, CCA consistently outperforms LS on all three datasets when provided with 1000 or more semantic bridges (with 500, however, LS always has an edge). In the remaining experiments we only report results for the best configuration: node2vec with CCA. We also set the size of semantic bridge set to 5K.
Rare Word Similarity
In order to verify the reliability of our technique in coverage expansion for infrequent words we did a set of experiments on the Rare Word similarity dataset BIBREF6 . The dataset comprises 2034 pairs of rare words, such as ulcerate-change and nurturance-care, judged by 10 raters on a [0,10] scale. Table TABREF15 shows the results on the dataset for three pre-trained word embeddings (cf. § SECREF2 ), in their initial form as well as when enriched with additional words from WordNet.
Among the three initial embeddings, w2v-gn-500K provides the lowest coverage, with over 20% out-of-vocabulary pairs, whereas GloVe has a similar coverage to that of w2v-gn despite its significantly smaller vocabulary (400K vs. 3M). Upon enrichment, all the embeddings attain near full coverage (over 99%), thanks to the vocabulary expansion by rare words in WordNet. The enhanced coverage leads to consistent performance improvements according to both Pearson and Spearman correlations. The best performance gain is achieved for w2v-gn-500K (around 10% absolute gain) which proves the efficacy of our approach in inducing embeddings for rare words. The improvements are also statistically significant (p < 0.05) according to conducted one tailed t-test BIBREF20 , showing that the coverage enhancement could lead to improved performance even if lower-performing KB embedding and transformation are used.
Related Work
The main focus of research in embedding coverage enhancement has been on the morphologically complex forms BIBREF21 . BIBREF6 used recursive neural networks (RNNs) and neural language models in order to induce embeddings for morphologically complex words from their morphemes whereas BIBREF22 adapted phrase composition models for this purpose. BIBREF7 proposed a different model based on log-bilinear language models, mainly to have a compositional vector-based morphological representation that can be easily integrated into a machine translation decoder. These models often utilize a morphological segmentation toolkit, such as Morfessor BIBREF23 , in order to break inflected words into their morphological structures and to obtain segmentations for words in the vocabulary. BIBREF8 put forward a technique that does not rely on any external morphological analyzer and instead, induces morphological rules and transformations, represented as vectors in the same embedding space. Based on these rules a morphological graph is constructed and representations are inferred by analyzing morphological transformations in the graph. However, all these techniques fall short in inducing representations for single-morpheme words that are not seen frequently during training as they base their modeling on information available on sub-word units. In contrast, our transformation-based model can also induce embeddings for single-morpheme words that are infrequent or unseen in the training data, such as domain-specific entities.
Conclusions and Future Work
We presented a methodology for merging distributional semantic spaces and lexical ontologies and applied it to the task of extending the vocabulary of the former with the help of information extracted from the latter. We carried out an analysis for different KB embedding and semantic space mapping techniques and also showed that our methodology can lead to considerable enrichment of two standard word embedding models, leading to consistent improvements on the rare word similarity dataset. One interesting property of our approach is that it can be used in the reverse direction and for the completion of knowledge bases using the distributional information derived from text corpora. In future work, we plan to investigate this direction. We also intend to experiment with domain-specific lexical resources and measure the impact of coverage enhancement on a downstream NLP application. | To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. |
f062723bda695716aa7cb0f27675b7fc0d302d4d | f062723bda695716aa7cb0f27675b7fc0d302d4d_0 | Q: How are rare words defined?
Text: Introduction
The prominent model for representing semantics of words is the distributional vector space model BIBREF2 and the prevalent approach for constructing these models is the distributional one which assumes that semantics of a word can be predicted from its context, hence placing words with similar contexts in close proximity to each other in an imaginary high-dimensional vector space. Distributional techniques, either in their conventional form which compute co-occurrence matrices BIBREF2 , BIBREF3 and learn high-dimensional vectors for words, or the recent neural-based paradigm which directly learns latent low-dimensional vectors, usually referred to as embeddings BIBREF4 , rely on a multitude of occurrences for each individual word to enable accurate representations. As a result of this statistical nature, words that are infrequent or unseen during training, such as domain-specific words, will not have reliable embeddings. This is the case even if massive corpora are used for training, such as the 100B-word Google News dataset BIBREF5 .
Recent work on embedding induction has mainly focused on morphologically complex rare words and has tried to address the problem by learning transformations that can transfer a word's semantic information to its morphological variations, hence inducing embeddings for complex forms by breaking them into their sub-word units BIBREF6 , BIBREF7 , BIBREF8 . However, these techniques are unable to effectively model single-morpheme words for which no sub-word information is available in the training data, essentially ignoring most of the rare domain-specific entities which are crucial in the performance of NLP systems when applied to those domains.
On the other hand, distributional techniques generally ignore all the lexical knowledge that is encoded in dictionaries, ontologies, or other lexical resources. There exist hundreds of high coverage or domain-specific lexical resources which contain valuable information for infrequent words, particularly in domains such as health. Here, we present a methodology that merges the two worlds by benefiting from both expert-based lexical knowledge encoded in external resources as well as statistical information derived from large corpora, enabling vocabulary expansion not only for morphological variations but also for infrequent single-morpheme words. The contributions of this work are twofold: (1) we propose a technique that induces embeddings for rare and unseen words by exploiting the information encoded for them in an external lexical resource, and (2) we apply, possibly for the first time, vector space mapping techniques, which are widely used in multilingual settings, to map two lexical semantic spaces with different properties in the same language. We show that a transfer methodology can lead to consistent improvements on a standard rare word similarity dataset.
Methodology
We take an existing semantic space INLINEFORM0 and enrich it with rare and unseen words on the basis of the knowledge encoded for them in an external knowledge base (KB) INLINEFORM1 . The procedure has two main steps: we first embed INLINEFORM2 to transform it from a graph representation into a vector space representation (§ SECREF2 ), and then map this space to INLINEFORM3 (§ SECREF7 ). Our methodology is illustrated in Figure 1.
In our experiments, we used WordNet 3.0 BIBREF9 as our external knowledge base INLINEFORM0 . For word embeddings, we experimented with two popular models: (1) GloVe embeddings trained by BIBREF10 on Wikipedia and Gigaword 5 (vocab: 400K, dim: 300), and (2) w2v-gn, Word2vec BIBREF5 trained on the Google News dataset (vocab: 3M, dim: 300).
Knowledge Base Embedding
Our coverage enhancement starts by transforming the knowledge base INLINEFORM0 into a vector space representation that is comparable to that of the corpus-based space INLINEFORM1 . To this end, we use two techniques for learning low-dimensional feature spaces from knowledge graphs: DeepWalk and node2vec. DeepWalk uses a stream of short random walks in order to extract local information for a node from the graph. By treating these walks as short sentences and phrases in a special language, the approach learns latent representations for each node. Similarly, node2vec learns a mapping of nodes to continuous vectors that maximizes the likelihood of preserving network neighborhoods of nodes. Thanks to a flexible objective that is not tied to a particular sampling strategy, node2vec reports improvements over DeepWalk on multiple classification and link prediction datasets. For both these systems we used the default parameters and set the dimensionality of output representation to 100. Also, note that nodes in the semantic graph of WordNet represent synsets. Hence, a polysemous word would correspond to multiple nodes. In our experiments, we use the MaxSim assumption of BIBREF11 in order to map words to synsets.
To verify the reliability of these vector representations, we carried out an experiment on three standard word similarity datasets: RG-65 BIBREF12 , WordSim-353 similarity subset BIBREF13 , and SimLex-999 BIBREF14 . Table TABREF5 reports Pearson and Spearman correlations for the two KB embedding techniques (on WordNet's graph) and, as baseline, for our two word embeddings, i.e. w2v-gn and GloVe. The results are very similar, with node2vec proving to be slightly superior. We note that the performances are close to those of state-of-the-art WordNet approaches BIBREF15 , which shows the efficacy of these embedding techniques in capturing the semantic properties of WordNet's graph.
Semantic Space Transformation
Once we have the lexical resource INLINEFORM0 represented as a vector space INLINEFORM1 , we proceed with projecting it to INLINEFORM2 in order to improve the word coverage of the latter with additional words from the former. In this procedure we make two assumptions. Firstly, the two spaces provide reliable models of word semantics; hence, the relative within-space distances between words in the two spaces are comparable. Secondly, there exists a set of shared words between the two spaces, which we refer to as semantic bridges, from which we can learn a projection that maps one space into another.
As for the mapping, we used two techniques which are widely used for the mapping of semantic spaces belonging to different languages, mainly with the purpose of learning multilingual semantic spaces: Least squares BIBREF16 , BIBREF17 and Canonical Correlation Analysis BIBREF18 , BIBREF19 . These models receive as their input two vector spaces of two different languages and a seed lexicon for that language pair and learn a linear mapping between the two spaces. Ideally, words that are semantically similar across the two languages will be placed in close proximity to each other in the projected space. We adapt these models to the monolingual setting and for mapping two semantic spaces with different properties. As for the seed lexicon (to which in our setting we refer to as semantic bridges) in this monolingual setting, we use the set of monosemous words in the vocabulary which are deemed to have the most reliable semantic representations.
Specifically, let INLINEFORM0 and INLINEFORM1 be the corpus and KB semantic spaces, respectively, and INLINEFORM2 and INLINEFORM3 be their corresponding subset of semantic bridges, i.e., words that are monosemous according to the WordNet sense inventory. Note that INLINEFORM4 and INLINEFORM5 are vector matrices that contain representations for the same set of corresponding words, i.e., INLINEFORM6 . LS views the problem as a multivariate regression and learns a linear function INLINEFORM7 (where INLINEFORM8 and INLINEFORM9 are the dimensionalities of the KB and corpus spaces, respectively) on the basis of the following INLINEFORM10 -regularized least squares error objective and typically using stochastic gradient descent: DISPLAYFORM0
The enriched space INLINEFORM0 is then obtained as a union of INLINEFORM1 and INLINEFORM2 . CCA, on the other hand, learns two distinct linear mappings INLINEFORM3 and INLINEFORM4 with the objective of maximizing the correlation between the dimensions of the projected vectors INLINEFORM5 and INLINEFORM6 : DISPLAYFORM0
In this case, INLINEFORM0 is the union of INLINEFORM1 and INLINEFORM2 . In the next section we first compare different KB embedding and transformation techniques introduced in this section and then apply our methodology to a rare word similarity task.
Evaluation benchmark
To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. A high performance on this benchmark shows that the mapping has been successful in placing semantically similar terms near to each other whereas dissimilar terms are relatively far apart in the space. We repeat the computation for each pair in the reverse direction.
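As an illustration of this benchmark (names are illustrative), a pair is scored by taking one word's vector from each already aligned space, and the predicted scores are correlated with the gold ratings:

```python
import numpy as np
from scipy.stats import spearmanr

def cross_space_score(w1, w2, corpus_space, projected_kb_space):
    """Cosine similarity across the two aligned spaces for one word pair."""
    a, b = corpus_space[w1], projected_kb_space[w2]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# predicted = [cross_space_score(w1, w2, corpus_space, projected_kb_space)
#              for (w1, w2) in pairs]
# rho, _ = spearmanr(predicted, gold_scores)
```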
Comparison Study
Figure FIGREF1 shows the performance of different configurations on our three similarity datasets and for increasing sizes of semantic bridge sets. We experimented with four different configurations: two KB embedding approaches, i.e. DeepWalk and node2vec, and two mapping techniques, i.e. LS and CCA (cf. § SECREF2 ). In general, the optimal performance is reached when around 3K semantic bridges are used for transformation. DeepWalk and node2vec prove to be very similar in their performance across the three datasets. Among the two transformation techniques, CCA consistently outperforms LS on all three datasets when provided with 1000 or more semantic bridges (with 500, however, LS always has an edge). In the remaining experiments we only report results for the best configuration: node2vec with CCA. We also set the size of semantic bridge set to 5K.
Rare Word Similarity
In order to verify the reliability of our technique in coverage expansion for infrequent words we did a set of experiments on the Rare Word similarity dataset BIBREF6 . The dataset comprises 2034 pairs of rare words, such as ulcerate-change and nurturance-care, judged by 10 raters on a [0,10] scale. Table TABREF15 shows the results on the dataset for three pre-trained word embeddings (cf. § SECREF2 ), in their initial form as well as when enriched with additional words from WordNet.
Among the three initial embeddings, w2v-gn-500K provides the lowest coverage, with over 20% out-of-vocabulary pairs, whereas GloVe has a similar coverage to that of w2v-gn despite its significantly smaller vocabulary (400K vs. 3M). Upon enrichment, all the embeddings attain near full coverage (over 99%), thanks to the vocabulary expansion by rare words in WordNet. The enhanced coverage leads to consistent performance improvements according to both Pearson and Spearman correlations. The best performance gain is achieved for w2v-gn-500K (around 10% absolute gain) which proves the efficacy of our approach in inducing embeddings for rare words. The improvements are also statistically significant (p < 0.05) according to conducted one tailed t-test BIBREF20 , showing that the coverage enhancement could lead to improved performance even if lower-performing KB embedding and transformation are used.
Related Work
The main focus of research in embedding coverage enhancement has been on the morphologically complex forms BIBREF21 . BIBREF6 used recursive neural networks (RNNs) and neural language models in order to induce embeddings for morphologically complex words from their morphemes whereas BIBREF22 adapted phrase composition models for this purpose. BIBREF7 proposed a different model based on log-bilinear language models, mainly to have a compositional vector-based morphological representation that can be easily integrated into a machine translation decoder. These models often utilize a morphological segmentation toolkit, such as Morfessor BIBREF23 , in order to break inflected words into their morphological structures and to obtain segmentations for words in the vocabulary. BIBREF8 put forward a technique that does not rely on any external morphological analyzer and instead, induces morphological rules and transformations, represented as vectors in the same embedding space. Based on these rules a morphological graph is constructed and representations are inferred by analyzing morphological transformations in the graph. However, all these techniques fall short in inducing representations for single-morpheme words that are not seen frequently during training as they base their modeling on information available on sub-word units. In contrast, our transformation-based model can also induce embeddings for single-morpheme words that are infrequent or unseen in the training data, such as domain-specific entities.
Conclusions and Future Work
We presented a methodology for merging distributional semantic spaces and lexical ontologies and applied it to the task of extending the vocabulary of the former with the help of information extracted from the latter. We carried out an analysis for different KB embedding and semantic space mapping techniques and also showed that our methodology can lead to considerable enrichment of two standard word embedding models, leading to consistent improvements on the rare word similarity dataset. One interesting property of our approach is that it can be used in the reverse direction and for the completion of knowledge bases using the distributional information derived from text corpora. In future work, we plan to investigate this direction. We also intend to experiment with domain-specific lexical resources and measure the impact of coverage enhancement on a downstream NLP application. | judged by 10 raters on a [0,10] scale |
50e3fd6778dadf8ec0ff589aa8b18c61bdcacd41 | 50e3fd6778dadf8ec0ff589aa8b18c61bdcacd41_0 | Q: What other datasets are used?
Text: Introduction
There is a growing interest in research revolving around automated fake news detection and fact checking as its need increases due to the dangerous speed fake news spreads on social media BIBREF0. With as much as 68% of adults in the United States regularly consuming news on social media, being able to distinguish fake from non-fake is a pressing need.
Numerous recent studies have tackled fake news detection with various techniques. The work of BIBREF1 identifies and verifies the stance of a headline with respect to its content as a first step in identifying potential fake news, achieving an accuracy of 89.59% on a publicly available article stance dataset. The work of BIBREF2 uses a deep learning approach and integrates multiple sources to assign a degree of “fakeness” to an article, beating representative baselines on a publicly-available fake news dataset.
More recent approaches also incorporate newer, novel methods to aid in detection. The work of BIBREF3 handles fake news detection as a specific case of cross-level stance detection. In addition, their work also uses the presence of an “inverted pyramid” structure as an indicator of real news, using a neural network to encode a given article's structure.
While these approaches are valid and robust, most, if not all, modern fake news detection techniques assume the existence of large, expertly-annotated corpora to train models from scratch. Both BIBREF1 and BIBREF3 use the Fake News Challenge dataset, with 49,972 labeled stances for each headline-body pairs. BIBREF2, on the other hand, uses the LIAR dataset BIBREF4, which contains 12,836 labeled short statements as well as sources to support the labels.
This requirement for large datasets to effectively train fake news detection models from scratch makes it difficult to adapt these techniques into low-resource languages. Our work focuses on the use of Transfer Learning (TL) to evade this data scarcity problem.
We make three contributions.
First, we construct the first fake news dataset in the low-resourced Filipino language, alleviating data scarcity for research in this domain.
Second, we show that TL techniques such as ULMFiT BIBREF5, BERT BIBREF6, and GPT-2 BIBREF7, BIBREF8 perform better compared to few-shot techniques by a considerable margin.
Third, we show that auxiliary language modeling losses BIBREF9, BIBREF10 allow transformers to adapt to the stylometry of downstream tasks, which produces more robust fake news classifiers.
Methods
We provide a baseline model as a comparison point, using a few-shot learning-based technique to benchmark transfer learning against methods designed with low resource settings in mind. After which, we show three TL techniques that we studied and adapted to the task of fake news detection.
Methods ::: Baseline
We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.
A siamese network is composed of weight-tied twin networks that accept distinct inputs, joined by an energy function, which computes a distance metric between the representations given by both twins. The network could then be trained to differentiate between classes in order to perform classification BIBREF11.
We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long Short-Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations.
Each twin embeds and computes representations for a pair of sequences, with the prediction vector $p$ computed as:
where $o_i$ denotes the output representation of each siamese twin $i$ , $W_{\textnormal {out}}$ and $b_{\textnormal {out}}$ denote the weight matrix and bias of the output layer, and $\sigma $ denotes the sigmoid activation function.
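A minimal PyTorch sketch of such a baseline (hidden sizes and the absolute-difference energy function are assumptions, not the exact configuration used):

```python
import torch
import torch.nn as nn

class Twin(nn.Module):
    """One weight-tied twin: embedding -> LSTM -> ReLU feed-forward layer."""
    def __init__(self, vocab_size, emb_dim=300, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, tokens):                    # tokens: (batch, seq_len) ids
        _, (h, _) = self.lstm(self.emb(tokens))
        return self.ff(h[-1])                     # (batch, hidden)

class SiameseClassifier(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.twin = Twin(vocab_size, hidden=hidden)  # shared weights for both inputs
        self.out = nn.Linear(hidden, 1)

    def forward(self, seq_a, seq_b):
        o1, o2 = self.twin(seq_a), self.twin(seq_b)
        energy = torch.abs(o1 - o2)               # assumed energy function
        return torch.sigmoid(self.out(energy))    # prediction vector p
```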
Methods ::: ULMFiT
ULMFiT BIBREF5 was introduced as a TL method for Natural Language Processing (NLP) that works akin to ImageNet BIBREF13 pretraining in Computer Vision.
It uses an AWD-LSTM BIBREF14 pretrained on a language modeling objective as a base model, which is then finetuned to a downstream task in two steps.
First, the language model is finetuned to the text of the target task to adapt to the task syntactically. Second, a classification layer is appended to the model and is finetuned to the classification task conservatively. During finetuning, multiple different techniques are introduced to prevent catastrophic forgetting.
ULMFiT delivers state-of-the-art performance for text classification, and is notable for being able to achieve comparable scores with as few as 1000 samples of data, making it attractive for use in low-resource settings BIBREF5.
Methods ::: BERT
BERT is a Transformer-based BIBREF15 language model designed to pretrain “deep bidirectional representations” that can be finetuned to different tasks, with state-of-the-art results achieved in multiple language understanding benchmarks BIBREF6.
As with all Transformers, it draws power from a mechanism called “Attention” BIBREF16, which allows the model to compute weighted importance for each token in a sequence, effectively pinpointing context reference BIBREF15. Precisely, we compute attention on a set of queries packed as a matrix $Q$ on key and value matrices $K$ and $V$, respectively, as:
where $d_{k}$ is the dimensions of the key matrix $K$. Attention allows the Transformer to refer to multiple positions in a sequence for context at any given time regardless of distance, which is an advantage over Recurrent Neural Networks (RNN).
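A direct NumPy rendering of this formula (row-wise softmax over the scaled scores) is given below for reference:

```python
import numpy as np

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```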
BERT's advantage over ULMFiT is its bidirectionality, leveraging both left and right context using a pretraining method called “Masked Language Modeling.” In addition, BERT also benefits from being deep, allowing it to capture more context and information. BERT-Base, the smallest BERT model, has 12 layers (768 units in each hidden layer) and 12 attention heads for a total of 110M parameters. Its larger sibling, BERT-Large, has 24 layers (1024 units in each hidden layer) and 16 attention heads for a total of 340M parameters.
Methods ::: GPT-2
GPT-2 BIBREF8 builds on the original GPT BIBREF7. Its main contribution is the way it is trained: with an improved architecture, it learns to perform multiple tasks by training on vanilla language modeling alone.
Architecture-wise, it is a Transformer-based model similar to BERT, with a few differences. It uses two feed-forward layers per transformer “block,” in addition to “delayed residuals,” which allow the model to choose which transformed representations to output.
GPT-2 is notable for being extremely deep, with 1.5B parameters, 10x more than the original GPT architecture. This gives it more flexibility in learning tasks unsupervised from language modeling, especially when trained on a very large unlabeled corpus.
Methods ::: Multitask Finetuning
BERT and GPT-2 both lack an explicit “language model finetuning step,” which gives ULMFiT an advantage in that it learns to adapt to the stylometry and linguistic features of the text used by its target task. Motivated by this, we propose to augment Transformer-based TL techniques with a language model finetuning step.
Motivated by recent advancements in multitask learning, we finetune the model to the stylometry of the target task at the same time as we finetune the classifier, instead of setting it as a separate step. This produces two losses to be optimized together during training, and ensures that no task (stylometric adaptation or classification) will be prioritized over the other. This concept has been proposed and explored to improve the performance of transfer learning in multiple language tasks BIBREF9, BIBREF10.
We show that this method improves performance on both BERT and GPT-2, given that it learns to adapt to the idiosyncrasies of its target task in a similar way that ULMFiT does.
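A minimal sketch of this joint objective is shown below, assuming a generic shared encoder, a language modeling head, a classification head, and an equal weighting of the two cross-entropy losses; the module names and sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultitaskHeads(nn.Module):
    """Shared encoder with a language-modeling head and a classification head."""
    def __init__(self, vocab_size=30000, hidden=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)  # stand-in for a transformer
        self.lm_head = nn.Linear(hidden, vocab_size)   # predicts the next token (stylometry)
        self.clf_head = nn.Linear(hidden, num_classes) # predicts real vs. fake

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))
        return self.lm_head(states), self.clf_head(states[:, -1])

model = MultitaskHeads()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, 30000, (8, 32))      # a batch of token ids
labels = torch.randint(0, 2, (8,))             # fake news labels

lm_logits, clf_logits = model(tokens[:, :-1])
lm_loss = ce(lm_logits.reshape(-1, 30000), tokens[:, 1:].reshape(-1))  # stylometric adaptation
clf_loss = ce(clf_logits, labels)                                      # classification
loss = lm_loss + clf_loss                      # both tasks optimized together
loss.backward()
optimizer.step()
```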
Experimental Setup ::: Fake News Dataset
We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split of 1,603 real and 1,603 fake articles. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.
For preprocessing, we only perform tokenization on our dataset, specifically “Byte-Pair Encoding” (BPE) BIBREF17. BPE is a form of fixed-vocabulary subword tokenization that considers subword units as the most primitive form of entity (i.e. a token) instead of canonical words (i.e. “I am walking today” $\rightarrow $ “I am walk ##ing to ##day”). BPE is useful as it allows our model to represent out-of-vocabulary (OOV) words, unlike standard tokenization. In addition, it helps language models in learning morphologically-rich languages as it now treats morphemes as primary entities instead of canonical word tokens.
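As an illustration, a BPE tokenizer of this kind can be trained and applied with the HuggingFace tokenizers library as sketched below; the corpus file name, vocabulary size, and example sentence are placeholder assumptions and may differ from the authors' setup.

```python
from tokenizers import ByteLevelBPETokenizer

# Train a BPE vocabulary on raw text files (path and sizes are illustrative).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["fake_news_corpus.txt"], vocab_size=30000, min_frequency=2)

# Subword units let the model represent words it has never seen in full.
encoding = tokenizer.encode("I am walking today")
print(encoding.tokens)  # e.g. ['I', 'Ġam', 'Ġwalk', 'ing', 'Ġto', 'day']
```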
For training/finetuning the classifiers, we use a 70%-30% train-test split of the dataset.
Experimental Setup ::: Pretraining Corpora
To pretrain BERT and GPT-2 language models, as well as an AWD-LSTM language model for use in ULMFiT, a large unlabeled training corpus is needed. For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18. We form training-validation-test splits of 70%-15%-15% from this corpus.
Preprocessing is similar to the fake news dataset, with the corpus only being lightly preprocessed and tokenized using Byte-Pair Encoding.
Corpus statistics for the pretraining corpora are shown in Table TABREF17.
Experimental Setup ::: Siamese Network Training
We train a siamese recurrent neural network as our baseline. For each twin, we use 300 dimensions for the embedding layer and a hidden size of 512 for all hidden state vectors.
To optimize the network, we use a regularized cross-entropy objective of the following form:
$\mathcal {L}(x_1, x_2) = \mathbf {y}(x_1, x_2) \log p(x_1, x_2) + (1 - \mathbf {y}(x_1, x_2)) \log (1 - p(x_1, x_2)) + \lambda ^{\top } \vert \mathbf {w} \vert ^{2}$
where $\mathbf {y}(x_1, x_2) = 1$ when $x_1$ and $x_2$ are from the same class and 0 otherwise. We use the Adam optimizer BIBREF19 with an initial learning rate of 1e-4 to train the network for a maximum of 500 epochs.
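A single illustrative training step under this objective is sketched below; the stand-in model and the approximation of the regularization term through Adam's weight_decay argument are assumptions, not the exact training code.

```python
import torch
import torch.nn as nn

# Stand-in for the siamese network's pair representation |o_1 - o_2|; in practice
# this comes from the twin networks described in the baseline section.
model = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())

# The lambda |w|^2 regularization term is approximated here with weight_decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
bce = nn.BCELoss()  # -(y log p + (1 - y) log(1 - p)), averaged over the batch

pair_features = torch.abs(torch.randn(32, 512) - torch.randn(32, 512))
y = torch.randint(0, 2, (32, 1)).float()   # 1 when both articles share a class
p = model(pair_features)

loss = bce(p, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()       # repeated for up to 500 epochs in the setup above
```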
Experimental Setup ::: Transfer Pretraining
We pretrain a cased BERT-Base model using our prepared unlabeled text corpora using Google's provided pretraining scripts. For the masked language model pretraining objective, we use a 0.15 probability of a word being masked. We also set the maximum number of masked language model predictions to 20, and a maximum sequence length of 512. For training, we use a learning rate of 1e-4 and a batch size of 256. We train the model for 1,000,000 steps with 10,000 steps of learning rate warmup for 157 hours on a Google Cloud Tensor Processing Unit (TPU) v3-8.
For GPT-2, we pretrain a GPT-2 Transformer model on our prepared text corpora using language modeling as its sole pretraining task, according to the specifications of BIBREF8. We use an embedding dimension of 410, a hidden dimension of 2100, and a maximum sequence length of 256. We use 10 attention heads per multihead attention block, with 16 blocks composing the encoder of the transformer. We use dropout on all linear layers to a probability of 0.1. We initialize all parameters to a standard deviation of 0.02. For training, we use a learning rate of 2.5e-4, and a batch size of 32, much smaller than BERT considering the large size of the model. We train the model for 200 epochs with 1,000 steps of learning rate warmup using the Adam optimizer. The model was pretrained for 178 hours on a machine with one NVIDIA Tesla V100 GPU.
For ULMFiT, we pretrain a 3-layer AWD-LSTM model with an embedding size of 400 and a hidden size of 1150. We set the dropout values for the embedding, the RNN input, the hidden-to-hidden transition, and the RNN output to (0.1, 0.3, 0.3, 0.4) respectively. We use a weight dropout of 0.5 on the LSTM’s recurrent weight matrices. The model was trained for 30 epochs with a learning rate of 1e-3, a batch size of 128, and a weight decay of 0.1. We use the Adam optimizer and use slanted triangular learning rate schedules BIBREF5. We train the model on a machine with one NVIDIA Tesla V100 GPU for a total of 11 hours.
For each pretraining scheme, we checkpoint models every epoch to preserve a copy of the weights such that we may restore them once the model starts overfitting. This is done as an extra regularization technique.
Experimental Setup ::: Finetuning
We finetune our models to the target fake news classification task using the pretrained weights with an appended classification layer or head.
For BERT, we append a classification head composed of a single linear layer followed by a softmax transformation to the transformer model. We then finetune our BERT-Base model on the fake news classification task for 3 epochs, using a batch size of 32, and a learning rate of 2e-5.
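An assumed PyTorch rendering of such a head is given below; the 768-dimensional input matches BERT-Base's hidden size, while the module name and the absence of dropout are illustrative choices.

```python
import torch
import torch.nn as nn

class FakeNewsHead(nn.Module):
    """Single linear layer + softmax over the pooled BERT-Base representation."""
    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output):
        return torch.softmax(self.classifier(pooled_output), dim=-1)

# pooled_output would come from the pretrained BERT encoder's [CLS] token.
head = FakeNewsHead()
probs = head(torch.randn(32, 768))
print(probs.shape)  # torch.Size([32, 2])
```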
For GPT-2, our classification head is comprised of a layer normalization transform, followed by a linear layer, then a softmax transform. We finetune the pretrained GPT-2 transformer for 3 epochs, using a batch size of 32, and a learning rate of 3e-5.
For ULMFiT, we perform language model finetuning on the fake news dataset (appending no extra classification heads yet) for a total of 10 epochs, using a learning rate of 1e-2, a batch size of 80, and weight decay of 0.3. For the final ULMFiT finetuning stage, we append a compound classification head (linear $\rightarrow $ batch normalization $\rightarrow $ ReLU $\rightarrow $ linear $\rightarrow $ batch normalization $\rightarrow $ softmax). We then finetune for 5 epochs, gradually unfreezing layers from the last to the first until all layers are unfrozen on the fourth epoch. We use a learning rate of 1e-2 and set Adam's $\alpha $ and $\beta $ parameters to 0.8 and 0.7, respectively.
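The compound head and the gradual-unfreezing schedule can be sketched as follows; the 1150-dimensional input matches the AWD-LSTM hidden size, but the intermediate width of 50 and the grouping of layers into three groups are assumptions made for illustration.

```python
import torch.nn as nn

# Compound head: linear -> batch norm -> ReLU -> linear -> batch norm -> softmax.
compound_head = nn.Sequential(
    nn.Linear(1150, 50),
    nn.BatchNorm1d(50),
    nn.ReLU(),
    nn.Linear(50, 2),
    nn.BatchNorm1d(2),
    nn.Softmax(dim=-1),
)

def gradually_unfreeze(layer_groups, epoch):
    """Unfreeze one more layer group per epoch, starting from the head."""
    for i, group in enumerate(reversed(layer_groups)):
        trainable = i <= epoch
        for param in group.parameters():
            param.requires_grad = trainable

# Example: with three groups, only the head trains on the first epoch;
# all groups are unfrozen once epoch >= len(layer_groups) - 1.
layer_groups = [nn.Embedding(30000, 400), nn.LSTM(400, 1150), compound_head]
gradually_unfreeze(layer_groups, epoch=0)
```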
To show the efficacy of Multitask Finetuning, we augment BERT and GPT-2 to use this finetuning setup with their classification heads. We finetune both models to the target task for 3 epochs, using a batch size of 32, and a learning rate of 3e-5. For optimization, we use Adam with warmup steps set to 10% of the total number of training steps over the 3 epochs.
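The 10% warmup can be expressed with a simple LambdaLR schedule, as in the sketch below; the step counts are derived from assumed values (roughly 2,244 training articles at batch size 32 over 3 epochs) and the constant rate after warmup is an illustrative simplification.

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 2)                       # stand-in for the full classifier
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

total_steps = 210                               # ~2,244 articles / batch size 32 * 3 epochs
warmup_steps = int(0.10 * total_steps)          # 10% of all training steps

def lr_lambda(step):
    # Linear warmup to the peak learning rate, then constant (decay is optional).
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(total_steps):
    optimizer.step()                            # loss.backward() would precede this
    scheduler.step()
```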
Experimental Setup ::: Generalizability Across Domains
To study the generalizability of the model to different news domains, we test our models against test cases not found in the training dataset. We mainly focus on three domains: political news, opinion articles, and entertainment/gossip articles. Articles used for testing are sourced from the same websites that the training dataset was taken from.
Results and Discussion ::: Classification Results
Our baseline model, the siamese recurrent network, achieved an accuracy of 77.42% on the test set of the fake news classification task.
The transfer learning methods gave comparable scores. BERT finetuned to a final 87.47% accuracy, a 10.05% improvement over the siamese network's performance. GPT-2 finetuned to a final accuracy of 90.99%, a 13.57% improvement from the baseline performance. ULMFiT finetuning gave a final accuracy of 91.59%, an improvement of 14.17% over the baseline Siamese Network.
We can see that the TL techniques outperformed the siamese network baseline, which we hypothesize is due to the intact pretrained knowledge in the language models used to finetune the classifiers. The pretraining step aided the models in forming relationships between text, and thus they performed better at stylometry-based tasks with little finetuning.
The model results are all summarized in Table TABREF26.
Results and Discussion ::: Language Model Finetuning Significance
One of the most surprising results is that BERT and GPT-2 performed worse than ULMFiT in the fake news classification task despite being deeper models capable of modeling more complex relationships in the data.
We hypothesize that ULMFiT achieved better accuracy because of its additional language model finetuning step. We provide evidence for this assumption with an additional experiment that shows a decrease in performance when the language model finetuning step is removed, dropping ULMFiT's accuracy to 78.11%, making it only perform marginally better than the baseline model. Results for this experiment are outlined in Table TABREF28.
In this finetuning stage, the model is said to “adapt to the idiosyncrasies of the task it is solving” BIBREF5. Given that our techniques rely on linguistic cues and features to make accurate predictions, having the model adapt to the stylometry or “writing style” of an article will therefore improve performance.
Results and Discussion ::: Multitask-based Finetuning
We used a multitask finetuning technique over the standard finetuning steps for BERT and GPT-2, motivated by the advantage that language model finetuning provides to ULMFiT, and found that it greatly improves the performance of our models.
BERT achieved a final accuracy of 91.20%, now marginally comparable to ULMFiT's full performance. GPT-2, on the other hand, finetuned to a final accuracy of 96.28%, a full 4.69% improvement over the performance of ULMFiT. This provides evidence towards our hypothesis that a language model finetuning step will allow transformer-based TL techniques to perform better, given their inherent advantage in modeling complexity over more shallow models such as the AWD-LSTM used by ULMFiT. Results for this experiment are outlined in Table TABREF30.
Ablation Studies
Several ablation studies are performed to establish causation between the model architectures and the performance boosts in the study.
Ablation Studies ::: Pretraining Effects
An ablation on pretraining was done to establish evidence that pretraining before finetuning accounts for a significant boost in performance over the baseline model. Using non-pretrained models, we finetune for the fake news classification task using the same settings as in the prior experiments.
In Table TABREF32, it can be seen that generative pretraining via language modeling does account for a considerable amount of performance, constituting 44.32% of the overall performance (a boost of 42.67% in accuracy) in the multitasking setup, and constituting 43.93% of the overall performance (a boost of 39.97%) in the standard finetuning setup.
This provides evidence that the pretraining step is necessary in achieving state-of-the-art performance.
Ablation Studies ::: Attention Head Effects
An ablation study was done to establish causality between the multiheaded nature of the attention mechanisms and state-of-the-art performance. We posit that since the model can refer to multiple context points at once, it improves in performance.
For this experiment, we performed several pretraining-finetuning setups with varied numbers of attention heads using the multitask-based finetuning scheme. Using a pretrained GPT-2 model, attention heads were masked with zero-tensors to downsample the number of positions the model could attend to at one time.
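The head-masking idea can be sketched as follows on a generic multi-head attention output: the hidden dimension is split into heads, selected heads are zeroed, and the heads are merged back. The shapes and the choice of which heads to keep are illustrative; the actual experiment operates on the pretrained GPT-2 weights.

```python
import torch

def mask_attention_heads(attn_output, num_heads, keep_heads):
    """Zero out all attention heads except those listed in keep_heads."""
    batch, seq_len, hidden = attn_output.shape
    head_dim = hidden // num_heads
    per_head = attn_output.view(batch, seq_len, num_heads, head_dim).clone()
    mask = torch.zeros(num_heads)
    mask[list(keep_heads)] = 1.0                      # 1 for heads the model may attend with
    per_head = per_head * mask.view(1, 1, num_heads, 1)
    return per_head.view(batch, seq_len, hidden)

# Example: keep only 1 of 10 heads, as in the most restrictive ablation setting.
attn_output = torch.randn(2, 16, 410)                 # the GPT-2 setup here uses hidden size 410
masked = mask_attention_heads(attn_output, num_heads=10, keep_heads=[0])
```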
As shown in Table TABREF34, reducing the number of attention heads severely decreases multitasking performance. Using only one attention head, thereby attending to only one context position at once, degrades the performance to less than the performance of 10 heads using the standard finetuning scheme. This shows that having more attention heads, and thereby attending to multiple different contexts at once, is important to boosting performance to state-of-the-art results.
While increasing the number of attention heads improves performance, adding extra heads beyond a point does not yield an equivalent boost, as the performance plateaus after a certain number of heads.
As shown in Figure FIGREF35, the performance boost of the model plateaus after 10 attention heads, which was the default used in the study. While the performance with 16 heads is greater than with 10, it is only a marginal improvement and does not justify the added cost of training with more attention heads.
Stylometric Tests
To supplement our understanding of the features our models learn and to establish empirical differences in their stylometries, we use two stylometric tests traditionally used for authorship attribution: Mendenhall's Characteristic Curves BIBREF20 and John Burrow's Delta Method BIBREF21.
We provide a characteristic curve comparison to establish differences between real and fake news. For the rest of this section, we refer to the characteristic curves on Figure FIGREF36.
Looking at the y-axis, there is a large difference in word count: the fake news corpus has twice as many words as the real news corpus. This means that fake news articles are, on average, lengthier than real news articles. The only difference seen on the x-axis is the order of appearance of word lengths 6, 7, and 1. The characteristic curves also exhibit differences in trend. While the head and tail look similar, the bodies show different trends. When graphing the corpora by news category, the heads and tails look similar to the general real and fake news characteristic curves, but the bodies exhibit trends different from the general corpora. This difference in trend may be attributed either to a lack of text data to properly represent real and fake news or to the existence of a stylistic difference between real and fake news.
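Mendenhall's characteristic curve is simply the distribution of word lengths in a corpus. The short sketch below computes and prints it for two toy samples standing in for the real and fake news corpora; the regular expression and example sentences are illustrative only.

```python
from collections import Counter
import re

def characteristic_curve(text):
    """Return {word length: frequency}, the basis of Mendenhall's curve."""
    words = re.findall(r"[A-Za-z']+", text)
    return Counter(len(word) for word in words)

real_sample = "The senator filed the measure before the session adjourned on Monday."
fake_sample = "SHOCKING!!! You will not believe what this celebrity secretly did last night!"

for label, sample in [("real", real_sample), ("fake", fake_sample)]:
    curve = characteristic_curve(sample)
    print(label, sorted(curve.items()))   # word length vs. count, ready to plot
```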
We also use Burrow’s Delta method to obtain a numeric distance between text samples. Using the labeled news article corpora, we compare samples outside of the corpora against real and fake news to see how similar they are in terms of vocabulary distance. The test produces a smaller distance for the correct label, which further reaffirms our hypothesis that there is a stylistic difference between the labels. However, the difference in distance between real and fake news against the sample is not significantly large. For articles on politics, business, entertainment, and viral events, the test generates distances that are significant. Meanwhile, news in the safety, sports, technology, infrastructure, educational, and health categories has negligible differences in distance. This suggests that some categories are written similarly despite veracity.
Further Discussions ::: Pretraining Tasks
All the TL techniques were pretrained with a language modeling-based task. While language modeling has been empirically proven as a good pretraining task, we surmise that other pretraining tasks could replace or support it.
Since automatic fake news detection uses stylometric information (i.e. writing style, language cues), we predict that the task could benefit from pretraining objectives that also learn stylometric information such as authorship attribution.
Further Discussions ::: Generalizability Across Domains
When testing on three different types of articles (Political News, Opinion, Entertainment/Gossip), we find that writing style is a prominent indicator for fake articles, supporting previous findings regarding writing style in fake news detection BIBREF22.
Supported by our findings on the stylometric differences of fake and real news, we show that the model predicts a label based on the test article's stylometry. It produces correct labels when tested on real and fake news.
We provide further evidence that the models learn stylometry by testing on out-of-domain articles, particularly opinion and gossip articles. While these articles are not necessarily real or fake, their stylometries are akin to those of real and fake articles, respectively, and so they are classified as such.
Conclusion
In this paper, we show that TL techniques can be used to train robust fake news classifiers in low-resource settings, with TL methods performing better than few-shot techniques, even though the latter are designed with such settings in mind.
We also show the significance of language model finetuning for tasks that involve stylometric cues, with ULMFiT performing better than transformer-based techniques with deeper language model backbones. Motivated by this, we augment the methodology with a multitask learning-inspired finetuning technique that allowed transformer-based transfer learning techniques to adapt to the stylometry of a target task, much like ULMFiT, resulting in better performance.
For future work, we propose that more pretraining tasks be explored, particularly ones that learn stylometric information inherently (such as authorship attribution).
Acknowledgments
The authors would like to acknowledge the efforts of VeraFiles and the National Union of Journalists in the Philippines (NUJP) for their work covering and combating the spread of fake news.
We are partially supported by Google's TensorFlow Research Cloud (TFRC) program. Access to the TPU units provided by the program made the BERT models in this paper, as well as the countless experiments that brought it to fruition, possible. | WikiText-TL-39 |
c5980fe1a0c53bce1502cc674c8a2ed8c311f936 | c5980fe1a0c53bce1502cc674c8a2ed8c311f936_0 | Q: What is the size of the dataset?
Text: Introduction
There is a growing interest in research revolving around automated fake news detection and fact checking, as the need for it increases due to the dangerous speed at which fake news spreads on social media BIBREF0. With as much as 68% of adults in the United States regularly consuming news on social media, being able to distinguish fake from non-fake is a pressing need.
Numerous recent studies have tackled fake news detection with various techniques. The work of BIBREF1 identifies and verifies the stance of a headline with respect to its content as a first step in identifying potential fake news, achieving an accuracy of 89.59% on a publicly available article stance dataset. The work of BIBREF2 uses a deep learning approach and integrates multiple sources to assign a degree of “fakeness” to an article, beating representative baselines on a publicly-available fake news dataset.
More recent approaches also incorporate newer, novel methods to aid in detection. The work of BIBREF3 handles fake news detection as a specific case of cross-level stance detection. In addition, their work also uses the presence of an “inverted pyramid” structure as an indicator of real news, using a neural network to encode a given article's structure.
While these approaches are valid and robust, most, if not all, modern fake news detection techniques assume the existence of large, expertly-annotated corpora to train models from scratch. Both BIBREF1 and BIBREF3 use the Fake News Challenge dataset, with 49,972 labeled stances for headline-body pairs. BIBREF2, on the other hand, uses the LIAR dataset BIBREF4, which contains 12,836 labeled short statements as well as sources to support the labels.
This requirement for large datasets to effectively train fake news detection models from scratch makes it difficult to adapt these techniques to low-resource languages. Our work focuses on the use of Transfer Learning (TL) to evade this data scarcity problem.
We make three contributions.
| 3,206 |
7d3c036ec514d9c09c612a214498fc99bf163752 | 7d3c036ec514d9c09c612a214498fc99bf163752_0 | Q: What is the source of the dataset?
Text: Introduction
There is a growing interest in research revolving around automated fake news detection and fact checking as its need increases due to the dangerous speed fake news spreads on social media BIBREF0. With as much as 68% of adults in the United States regularly consuming news on social media, being able to distinguish fake from non-fake is a pressing need.
Numerous recent studies have tackled fake news detection with various techniques. The work of BIBREF1 identifies and verifies the stance of a headline with respect to its content as a first step in identifying potential fake news, achieving an accuracy of 89.59% on a publicly available article stance dataset. The work of BIBREF2 uses a deep learning approach and integrates multiple sources to assign a degree of “fakeness” to an article, beating representative baselines on a publicly-available fake news dataset.
More recent approaches also incorporate newer, novel methods to aid in detection. The work of BIBREF3 handles fake news detection as a specific case of cross-level stance detection. In addition, their work also uses the presence of an “inverted pyramid” structure as an indicator of real news, using a neural network to encode a given article's structure.
While these approaches are valid and robust, most, if not all, modern fake news detection techniques assume the existence of large, expertly-annotated corpora to train models from scratch. Both BIBREF1 and BIBREF3 use the Fake News Challenge dataset, with 49,972 labeled stances for each headline-body pairs. BIBREF2, on the other hand, uses the LIAR dataset BIBREF4, which contains 12,836 labeled short statements as well as sources to support the labels.
This requirement for large datasets to effectively train fake news detection models from scratch makes it difficult to adapt these techniques into low-resource languages. Our work focuses on the use of Transfer Learning (TL) to evade this data scarcity problem.
We make three contributions.
First, we construct the first fake news dataset in the low-resourced Filipino language, alleviating data scarcity for research in this domain.
Second, we show that TL techniques such as ULMFiT BIBREF5, BERT BIBREF6, and GPT-2 BIBREF7, BIBREF8 perform better compared to few-shot techniques by a considerable margin.
Third, we show that auxiliary language modeling losses BIBREF9, BIBREF10 allows transformers to adapt to the stylometry of downstream tasks, which produces more robust fake news classifiers.
Methods
We provide a baseline model as a comparison point, using a few-shot learning-based technique to benchmark transfer learning against methods designed with low resource settings in mind. After which, we show three TL techniques that we studied and adapted to the task of fake news detection.
Methods ::: Baseline
We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.
A siamese network is composed of weight-tied twin networks that accept distinct inputs, joined by an energy function, which computes a distance metric between the representations given by both twins. The network could then be trained to differentiate between classes in order to perform classification BIBREF11.
We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long-Short Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations.
Each twin embeds and computes representations for a pair of sequences, with the prediction vector $p$ computed as:
where $o_i$ denotes the output representation of each siamese twin $i$ , $W_{\textnormal {out}}$ and $b_{\textnormal {out}}$ denote the weight matrix and bias of the output layer, and $\sigma $ denotes the sigmoid activation function.
Methods ::: ULMFiT
ULMFiT BIBREF5 was introduced as a TL method for Natural Language Processing (NLP) that works akin to ImageNet BIBREF13 pretraining in Computer Vision.
It uses an AWD-LSTM BIBREF14 pretrained on a language modeling objective as a base model, which is then finetuned to a downstream task in two steps.
First, the language model is finetuned to the text of the target task to adapt to the task syntactically. Second, a classification layer is appended to the model and is finetuned to the classification task conservatively. During finetuning, multiple different techniques are introduced to prevent catastrophic forgetting.
ULMFiT delivers state-of-the-art performance for text classification, and is notable for being able to set comparable scores with as little as 1000 samples of data, making it attractive for use in low-resource settings BIBREF5.
Methods ::: BERT
BERT is a Transformer-based BIBREF15 language model designed to pretrain “deep bidirectional representations” that can be finetuned to different tasks, with state-of-the-art results achieved in multiple language understanding benchmarks BIBREF6.
As with all Transformers, it draws power from a mechanism called “Attention” BIBREF16, which allows the model to compute weighted importance for each token in a sequence, effectively pinpointing context reference BIBREF15. Precisely, we compute attention on a set of queries packed as a matrix $Q$ on key and value matrices $K$ and $V$, respectively, as:
where $d_{k}$ is the dimensions of the key matrix $K$. Attention allows the Transformer to refer to multiple positions in a sequence for context at any given time regardless of distance, which is an advantage over Recurrent Neural Networks (RNN).
BERT's advantage over ULMFiT is its bidirectionality, leveraging both left and right context using a pretraining method called “Masked Language Modeling.” In addition, BERT also benefits from being deep, allowing it to capture more context and information. BERT-Base, the smallest BERT model, has 12 layers (768 units in each hidden layer) and 12 attention heads for a total of 110M parameters. Its larger sibling, BERT-Large, has 24 layers (1024 units in each hidden layer) and 16 attention heads for a total of 340M parameters.
Methods ::: GPT-2
The GPT-2 BIBREF8 technique builds up from the original GPT BIBREF7. Its main contribution is the way it is trained. With an improved architecture, it learns to do multiple tasks by just training on vanilla language modeling.
Architecture-wise, it is a Transformer-based model similar to BERT, with a few differences. It uses two feed-forward layers per transformer “block,” in addition to using “delayed residuals” which allows the model to choose which transformed representations to output.
GPT-2 is notable for being extremely deep, with 1.5B parameters, 10x more than the original GPT architecture. This gives it more flexibility in learning tasks unsupervised from language modeling, especially when trained on a very large unlabeled corpus.
Methods ::: Multitask Finetuning
BERT and GPT-2 both lack an explicit “language model finetuning step,” which gives ULMFiT an advantage where it learns to adapt to the stylometry and linguistic features of the text used by its target task. Motivated by this, we propose to augment Transformer-based TL techniques with a language model finetuning step.
Motivated by recent advancements in multitask learning, we finetune the model to the stylometry of the target task at the same time as we finetune the classifier, instead of setting it as a separate step. This produces two losses to be optimized together during training, and ensures that no task (stylometric adaptation or classification) will be prioritized over the other. This concept has been proposed and explored to improve the performance of transfer learning in multiple language tasks BIBREF9, BIBREF10.
We show that this method improves performance on both BERT and GPT-2, given that it learns to adapt to the idiosyncracies of its target task in a similar way that ULMFiT also does.
Experimental Setup ::: Fake News Dataset
We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.
For preprocessing, we only perform tokenization on our dataset, specifically “Byte-Pair Encoding” (BPE) BIBREF17. BPE is a form of fixed-vocabulary subword tokenization that considers subword units as the most primitive form of entity (i.e. a token) instead of canonical words (i.e. “I am walking today” $\rightarrow $ “I am walk ##ing to ##day”). BPE is useful as it allows our model to represent out-of-vocabulary (OOV) words unlike standard tokenization. In addition, it helps language models in learning morphologically-rich languages as it now treats morphemes as primary enitites instead of canonical word tokens.
For training/finetuning the classifiers, we use a 70%-30% train-test split of the dataset.
Experimental Setup ::: Pretraining Corpora
To pretrain BERT and GPT-2 language models, as well as an AWD-LSTM language model for use in ULMFiT, a large unlabeled training corpora is needed. For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18. We form training-validation-test splits of 70%-15%-15% from this corpora.
Preprocessing is similar to the fake news dataset, with the corpus only being lightly preprocessed and tokenized using Byte-Pair Encoding.
Corpus statistics for the pretraining corpora are shown on table TABREF17.
Experimental Setup ::: Siamese Network Training
We train a siamese recurrent neural network as our baseline. For each twin, we use 300 dimensions for the embedding layer and a hidden size of 512 for all hidden state vectors.
To optimize the network, we use a regularized cross-entropy objective of the following form:
where y$(x_1, x_2)$ = 1 when $x_1$ and $x_2$ are from the same class and 0 otherwise. We use the Adam optimizer BIBREF19 with an initial learning rate of 1e-4 to train the network for a maximum of 500 epochs.
Experimental Setup ::: Transfer Pretraining
We pretrain a cased BERT-Base model using our prepared unlabeled text corpora using Google's provided pretraining scripts. For the masked language model pretraining objective, we use a 0.15 probability of a word being masked. We also set the maximum number of masked language model predictions to 20, and a maximum sequence length of 512. For training, we use a learning rate of 1e-4 and a batch size of 256. We train the model for 1,000,000 steps with 10,000 steps of learning rate warmup for 157 hours on a Google Cloud Tensor processing Unit (TPU) v3-8.
For GPT-2, we pretrain a GPT-2 Transformer model on our prepared text corpora using language modeling as its sole pretraining task, according to the specifications of BIBREF8. We use an embedding dimension of 410, a hidden dimension of 2100, and a maximum sequence length of 256. We use 10 attention heads per multihead attention block, with 16 blocks composing the encoder of the transformer. We use dropout on all linear layers to a probability of 0.1. We initialize all parameters to a standard deviation of 0.02. For training, we use a learning rate of 2.5e-4, and a batch size of 32, much smaller than BERT considering the large size of the model. We train the model for 200 epochs with 1,000 steps of learning rate warmup using the Adam optimizer. The model was pretrained for 178 hours on a machine with one NVIDIA Tesla V100 GPU.
For ULMFiT, we pretrain a 3-layer AWD-LSTM model with an embedding size of 400 and a hidden size of 1150. We set the dropout values for the embedding, the RNN input, the hidden-to-hidden transition, and the RNN output to (0.1, 0.3, 0.3, 0.4) respectively. We use a weight dropout of 0.5 on the LSTM’s recurrent weight matrices. The model was trained for 30 epochs with a learning rate of 1e-3, a batch size of 128, and a weight decay of 0.1. We use the Adam optimizer and use slanted triangular learning rate schedules BIBREF5. We train the model on a machine with one NVIDIA Tesla V100 GPU for a total of 11 hours.
For each pretraining scheme, we checkpoint models every epoch to preserve a copy of the weights such that we may restore them once the model starts overfitting. This is done as an extra regularization technique.
Experimental Setup ::: Finetuning
We finetune our models to the target fake news classification task using the pretrained weights with an appended classification layer or head.
For BERT, we append a classification head composed of a single linear layer followed by a softmax transformation to the transformer model. We then finetune our BERT-Base model on the fake news classification task for 3 epochs, using a batch size of 32, and a learning rate of 2e-5.
For GPT-2, our classification head is first comprised of a layer normalization transform, followed by a linear layer, then a softmax transform. We finetune the pretrained GPT-2 transformer for 3 epochs, using a batch size of 32, and a learning rate of 3e-5.
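A sketch of the two classification heads described above, assuming a backbone that exposes a pooled hidden vector; the hidden sizes mirror the models used here (768 for BERT-Base, 410 for our GPT-2 configuration) and the two-way output corresponds to the real/fake labels:

```python
import torch.nn as nn
import torch.nn.functional as F

class BertStyleHead(nn.Module):
    """Single linear layer followed by a softmax, as appended to BERT."""
    def __init__(self, hidden_size=768, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, pooled):                 # pooled: (batch, hidden_size)
        return F.softmax(self.classifier(pooled), dim=-1)

class Gpt2StyleHead(nn.Module):
    """Layer normalization, then a linear layer, then a softmax, as appended to GPT-2."""
    def __init__(self, hidden_size=410, num_classes=2):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, pooled):
        return F.softmax(self.classifier(self.norm(pooled)), dim=-1)
```

During training, the softmax would typically be folded into a cross-entropy loss over the raw logits; it is kept explicit here to match the description above.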
For ULMFiT, we perform language model finetuning on the fake news dataset (appending no extra classification heads yet) for a total of 10 epochs, using a learning rate of 1e-2, a batch size of 80, and weight decay of 0.3. For the final ULMFiT finetuning stage, we append a compound classification head (linear $\rightarrow $ batch normalization $\rightarrow $ ReLU $\rightarrow $ linear $\rightarrow $ batch normalization $\rightarrow $ softmax). We then finetune for 5 epochs, gradually unfreezing layers from the last to the first until all layers are unfrozen on the fourth epoch. We use a learning rate of 1e-2 and set Adam's $\alpha $ and $\beta $ parameters to 0.8 and 0.7, respectively.
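A minimal sketch of the gradual unfreezing schedule, assuming the network's parameters are grouped into an ordered list of layer groups and that one epoch of finetuning is performed by a supplied callback:

```python
def gradual_unfreeze(layer_groups, run_one_epoch, epochs=5):
    """Unfreeze layer groups from the last (closest to the classifier) to the
    first, one additional group per epoch, so that all groups are trainable
    by the fourth epoch; training then continues with everything unfrozen."""
    for group in layer_groups:                      # start fully frozen
        for p in group:
            p.requires_grad = False
    for epoch in range(epochs):
        num_unfrozen = min(epoch + 1, len(layer_groups))
        for group in layer_groups[-num_unfrozen:]:  # unfreeze from the top down
            for p in group:
                p.requires_grad = True
        run_one_epoch(epoch)                        # one epoch of finetuning
```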
To show the efficacy of Multitask Finetuning, we augment BERT and GPT-2 to use this finetuning setup with their classification heads. We finetune both models to the target task for 3 epochs, using a batch size of 32, and a learning rate of 3e-5. For optimization, we use Adam with warmup steps equal to 10% of the total number of training steps over the 3 epochs.
Experimental Setup ::: Generalizability Across Domains
To study the generalizability of the model to different news domains, we test our models against test cases not found in the training dataset. We mainly focus on three domains: political news, opinion articles, and entertainment/gossip articles. Articles used for testing are sourced from the same websites that the training dataset was taken from.
Results and Discussion ::: Classification Results
Our baseline model, the siamese recurrent network, achieved an accuracy of 77.42% on the test set of the fake news classification task.
The transfer learning methods gave comparable scores. BERT finetuned to a final 87.47% accuracy, a 10.05% improvement over the siamese network's performance. GPT-2 finetuned to a final accuracy of 90.99%, a 13.57% improvement from the baseline performance. ULMFiT finetuning gave a final accuracy of 91.59%, an improvement of 14.17% over the baseline Siamese Network.
We can see that the TL techniques outperformed the siamese network baseline, which we hypothesize is due to the intact pretrained knowledge in the language models used to finetune the classifiers. The pretraining step aided the models in forming relationships between text, and thus they performed better at stylometry-based tasks with little finetuning.
The model results are all summarized in Table TABREF26.
Results and Discussion ::: Language Model Finetuning Significance
One of the most surprising results is that BERT and GPT-2 performed worse than ULMFiT in the fake news classification task despite being deeper models capable of more complex relationships between data.
We hypothesize that ULMFiT achieved better accuracy because of its additional language model finetuning step. We provide evidence for this assumption with an additional experiment that shows a decrease in performance when the language model finetuning step is removed, dropping ULMFiT's accuracy to 78.11% and making it perform only marginally better than the baseline model. Results for this experiment are outlined in Table TABREF28.
In this finetuning stage, the model is said to “adapt to the idiosyncrasies of the task it is solving” BIBREF5. Given that our techniques rely on linguistic cues and features to make accurate predictions, having the model adapt to the stylometry or “writing style” of an article will therefore improve performance.
Results and Discussion ::: Multitask-based Finetuning
We used a multitask finetuning technique over the standard finetuning steps for BERT and GPT-2, motivated by the advantage that language model finetuning provides to ULMFiT, and found that it greatly improves the performance of our models.
BERT achieved a final accuracy of 91.20%, now marginally comparable to ULMFiT's full performance. GPT-2, on the other hand, finetuned to a final accuracy of 96.28%, a full 4.69% improvement over the performance of ULMFiT. This provides evidence towards our hypothesis that a language model finetuning step will allow transformer-based TL techniques to perform better, given their inherent advantage in modeling complexity over shallower models such as the AWD-LSTM used by ULMFiT. Results for this experiment are outlined in Table TABREF30.
Ablation Studies
Several ablation studies are performed to establish causation between the model architectures and the performance boosts in the study.
Ablation Studies ::: Pretraining Effects
An ablation on pretraining was done to establish evidence that pretraining before finetuning accounts for a significant boost in performance over the baseline model. Using non-pretrained models, we finetune for the fake news classification task using the same settings as in the prior experiments.
In Table TABREF32, it can be seen that generative pretraining via language modeling does account for a considerable amount of performance, constituting 44.32% of the overall performance (a boost of 42.67% in accuracy) in the multitasking setup, and constituting 43.93% of the overall performance (a boost of 39.97%) in the standard finetuning setup.
This provides evidence that the pretraining step is necessary in achieving state-of-the-art performance.
Ablation Studies ::: Attention Head Effects
An ablation study was done to establish causality between the multiheaded nature of the attention mechanisms and state-of-the-art performance. We posit that since the model can refer to multiple context points at once, it improves in performance.
For this experiment, we performed several pretraining-finetuning setups with varied numbers of attention heads using the multitask-based finetuning scheme. Using a pretrained GPT-2 model, attention heads were masked with zero-tensors to downsample the number of positions the model could attend to at one time.
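A sketch of how such head masking can be applied to the per-head outputs of a multihead attention block; the tensor layout below is an assumption, and the exact hook point in the model is implementation-specific:

```python
import torch

def mask_attention_heads(per_head_output, heads_to_keep):
    """Zero out the output of attention heads not in `heads_to_keep`.
    `per_head_output` has shape (batch, num_heads, seq_len, head_dim)."""
    _, num_heads, _, _ = per_head_output.shape
    mask = torch.zeros(num_heads, dtype=per_head_output.dtype)
    mask[list(heads_to_keep)] = 1.0
    # Broadcast the per-head mask over batch, sequence, and feature dimensions.
    return per_head_output * mask.view(1, num_heads, 1, 1)

x = torch.randn(2, 10, 16, 64)                        # 10 heads, as in the pretrained model
reduced = mask_attention_heads(x, heads_to_keep=[0])  # keep only one head
```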
As shown in Table TABREF34, reducing the number of attention heads severely decreases multitasking performance. Using only one attention head, thereby attending to only one context position at once, degrades the performance to less than the performance of 10 heads using the standard finetuning scheme. This shows that more attention heads, thereby attending to multiple different contexts at once, is important to boosting performance to state-of-the-art results.
While increasing the number of attention heads improves performance, continuing to add extra heads will not result in an equivalent boost, as the performance plateaus after a certain number of heads.
As shown in Figure FIGREF35, the performance boost of the model plateaus after 10 attention heads, which was the default used in the study. While the performance of 16 heads is greater than that of 10, it is only a marginal improvement and does not justify the added cost of training with more attention heads.
Stylometric Tests
To supplement our understanding of the features our models learn and establish empirical differences in their stylometries, we use two stylometric tests traditionally used for authorship attribution: Mendenhall's Characteristic Curves BIBREF20 and John Burrows' Delta Method BIBREF21.
We provide a characteristic curve comparison to establish differences between real and fake news. For the rest of this section, we refer to the characteristic curves on Figure FIGREF36.
When looking at the y-axis, there is a big difference in word count. The fake news corpus has twice the number of words as the real news corpus. This means that fake news articles are on average lengthier than real news articles. The only difference seen on the x-axis is the order of appearance of word lengths 6, 7, and 1. The characteristic curves also exhibit differences in trend. While the head and tail look similar, the bodies show different trends. When graphing the corpora by news category, the heads and tails look similar to the general real and fake news characteristic curves, but the bodies exhibit a trend different from the general corpora. This difference in trend may be attributed to either a lack of text data to properly represent real and fake news or the existence of a stylistic difference between real and fake news.
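A minimal sketch of computing a Mendenhall-style characteristic curve (the relative frequency of each word length); the regex-based word splitting is a simplification of the preprocessing actually used:

```python
from collections import Counter
import re

def characteristic_curve(text):
    """Mendenhall-style curve: relative frequency of each word length."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    lengths = Counter(len(w) for w in words)
    total = sum(lengths.values())
    return {length: count / total for length, count in sorted(lengths.items())}

sample = "Fake news articles are on average lengthier than real news articles."
print(characteristic_curve(sample))
```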
We also use Burrows’ Delta method to see a numeric distance between text samples. Using the labeled news article corpora, we compare samples outside of the corpora against real and fake news to see how similar they are in terms of vocabulary distance. The test produces a smaller distance for the correct label, which further reaffirms our hypothesis that there is a stylistic difference between the labels. However, the difference in distance between real and fake news against the sample is not significantly large. For articles on politics, business, entertainment, and viral events, the test generates distances that are significant. Meanwhile, news in the safety, sports, technology, infrastructure, educational, and health categories has negligible differences in distance. This suggests that some categories are written similarly despite veracity.
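A compact sketch of the Delta computation (z-scored relative frequencies of the most frequent words, compared by mean absolute difference); the toy corpora and the number of top words below are illustrative only:

```python
import numpy as np
from collections import Counter

def relative_freqs(tokens, vocab):
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return np.array([counts[w] / total for w in vocab])

def burrows_delta(test_tokens, corpora_tokens, top_n=30):
    """Burrows' Delta: z-score the relative frequencies of the most frequent
    words across all corpora, then score each candidate corpus by the mean
    absolute z-score difference from the test sample (smaller = more similar)."""
    all_tokens = [t for toks in corpora_tokens.values() for t in toks]
    vocab = [w for w, _ in Counter(all_tokens).most_common(top_n)]
    freqs = {name: relative_freqs(toks, vocab) for name, toks in corpora_tokens.items()}
    matrix = np.stack(list(freqs.values()))
    mean, std = matrix.mean(axis=0), matrix.std(axis=0) + 1e-12
    test_z = (relative_freqs(test_tokens, vocab) - mean) / std
    return {name: float(np.mean(np.abs(test_z - (f - mean) / std)))
            for name, f in freqs.items()}

# Toy usage with invented token lists:
corpora = {"real": "the senate passed the bill today".split(),
           "fake": "shocking truth they do not want you to know".split()}
print(burrows_delta("the committee passed a new bill".split(), corpora))
```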
Further Discussions ::: Pretraining Tasks
All the TL techniques were pretrained with a language modeling-based task. While language modeling has been empirically proven as a good pretraining task, we surmise that other pretraining tasks could replace or support it.
Since automatic fake news detection uses stylometric information (i.e. writing style, language cues), we predict that the task could benefit from pretraining objectives that also learn stylometric information such as authorship attribution.
Further Discussions ::: Generalizability Across Domains
When testing on three different types of articles (Political News, Opinion, Entertainment/Gossip), we find that writing style is a prominent indicator for fake articles, supporting previous findings regarding writing style in fake news detection BIBREF22.
Supported by our findings on the stylometric differences of fake and real news, we show that the model predicts a label based on the test article's stylometry. It produces correct labels when tested on real and fake news.
We provide further evidence that the models learn stylometry by testing on out-of-domain articles, particularly opinion and gossip articles. While these articles aren't necessarily real or fake, their stylometries are akin to real and fake articles respectively, and so are classified as such.
Conclusion
In this paper, we show that TL techniques can be used to train robust fake news classifiers in low-resource settings, with TL methods performing better than few-shot techniques, despite low-resource settings being what few-shot techniques are designed for.
We also show the significance of language model finetuning for tasks that involve stylometric cues, with ULMFiT performing better than transformer-based techniques with deeper language model backbones. Motivated by this, we augment the methodology with a multitask learning-inspired finetuning technique that allowed transformer-based transfer learning techniques to adapt to the stylometry of a target task, much like ULMFiT, resulting in better performance.
For future work, we propose that more pretraining tasks be explored, particularly ones that learn stylometric information inherently (such as authorship attribution).
Acknowledgments
The authors would like to acknowledge the efforts of VeraFiles and the National Union of Journalists in the Philippines (NUJP) for their work covering and combating the spread of fake news.
We are partially supported by Google's Tensoflow Research Cloud (TFRC) program. Access to the TPU units provided by the program allowed the BERT models in this paper, as well as the countless experiments that brought it to fruition, possible. | Online sites tagged as fake news site by Verafiles and NUJP and news website in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera |
ef7b62a705f887326b7ebacbd62567ee1f2129b3 | ef7b62a705f887326b7ebacbd62567ee1f2129b3_0 | Q: What were the baselines?
Text: Introduction
There is a growing interest in research revolving around automated fake news detection and fact checking as its need increases due to the dangerous speed fake news spreads on social media BIBREF0. With as much as 68% of adults in the United States regularly consuming news on social media, being able to distinguish fake from non-fake is a pressing need.
Numerous recent studies have tackled fake news detection with various techniques. The work of BIBREF1 identifies and verifies the stance of a headline with respect to its content as a first step in identifying potential fake news, achieving an accuracy of 89.59% on a publicly available article stance dataset. The work of BIBREF2 uses a deep learning approach and integrates multiple sources to assign a degree of “fakeness” to an article, beating representative baselines on a publicly-available fake news dataset.
More recent approaches also incorporate newer, novel methods to aid in detection. The work of BIBREF3 handles fake news detection as a specific case of cross-level stance detection. In addition, their work also uses the presence of an “inverted pyramid” structure as an indicator of real news, using a neural network to encode a given article's structure.
While these approaches are valid and robust, most, if not all, modern fake news detection techniques assume the existence of large, expertly-annotated corpora to train models from scratch. Both BIBREF1 and BIBREF3 use the Fake News Challenge dataset, with 49,972 labeled stances for each headline-body pairs. BIBREF2, on the other hand, uses the LIAR dataset BIBREF4, which contains 12,836 labeled short statements as well as sources to support the labels.
This requirement for large datasets to effectively train fake news detection models from scratch makes it difficult to adapt these techniques into low-resource languages. Our work focuses on the use of Transfer Learning (TL) to evade this data scarcity problem.
We make three contributions.
First, we construct the first fake news dataset in the low-resourced Filipino language, alleviating data scarcity for research in this domain.
Second, we show that TL techniques such as ULMFiT BIBREF5, BERT BIBREF6, and GPT-2 BIBREF7, BIBREF8 perform better compared to few-shot techniques by a considerable margin.
Third, we show that auxiliary language modeling losses BIBREF9, BIBREF10 allow transformers to adapt to the stylometry of downstream tasks, which produces more robust fake news classifiers.
Methods
We provide a baseline model as a comparison point, using a few-shot learning-based technique to benchmark transfer learning against methods designed with low-resource settings in mind. We then describe three TL techniques that we studied and adapted to the task of fake news detection.
Methods ::: Baseline
We use a siamese neural network, shown to perform state-of-the-art few-shot learning BIBREF11, as our baseline model.
A siamese network is composed of weight-tied twin networks that accept distinct inputs, joined by an energy function, which computes a distance metric between the representations given by both twins. The network could then be trained to differentiate between classes in order to perform classification BIBREF11.
We modify the original to account for sequential data, with each twin composed of an embedding layer, a Long Short-Term Memory (LSTM) BIBREF12 layer, and a feed-forward layer with Rectified Linear Unit (ReLU) activations.
Each twin embeds and computes representations for a pair of sequences, with the prediction vector $p$ computed as:
where $o_i$ denotes the output representation of each siamese twin $i$ , $W_{\textnormal {out}}$ and $b_{\textnormal {out}}$ denote the weight matrix and bias of the output layer, and $\sigma $ denotes the sigmoid activation function.
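A PyTorch-style sketch of one twin and the shared prediction layer; the vocabulary size is a placeholder, and using the absolute difference $|o_1 - o_2|$ as the energy function is an assumption of this sketch rather than a detail stated in the text:

```python
import torch
import torch.nn as nn

class SiameseTwin(nn.Module):
    """One weight-tied twin: embedding -> LSTM -> feed-forward with ReLU."""
    def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embedding(token_ids))
        return self.ff(h_n[-1])                      # (batch, hidden_dim)

class SiameseClassifier(nn.Module):
    """Weight tying is achieved by reusing the same twin for both inputs; the
    prediction combines the twin outputs through a sigmoid output layer."""
    def __init__(self, twin, hidden_dim=512):
        super().__init__()
        self.twin = twin
        self.out = nn.Linear(hidden_dim, 1)          # W_out and b_out

    def forward(self, x1, x2):
        o1, o2 = self.twin(x1), self.twin(x2)
        return torch.sigmoid(self.out(torch.abs(o1 - o2))).squeeze(-1)
```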
Methods ::: ULMFiT
ULMFiT BIBREF5 was introduced as a TL method for Natural Language Processing (NLP) that works akin to ImageNet BIBREF13 pretraining in Computer Vision.
It uses an AWD-LSTM BIBREF14 pretrained on a language modeling objective as a base model, which is then finetuned to a downstream task in two steps.
First, the language model is finetuned to the text of the target task to adapt to the task syntactically. Second, a classification layer is appended to the model and is finetuned to the classification task conservatively. During finetuning, multiple different techniques are introduced to prevent catastrophic forgetting.
ULMFiT delivers state-of-the-art performance for text classification and is notable for being able to achieve comparable scores with as few as 1,000 samples of data, making it attractive for use in low-resource settings BIBREF5.
Methods ::: BERT
BERT is a Transformer-based BIBREF15 language model designed to pretrain “deep bidirectional representations” that can be finetuned to different tasks, with state-of-the-art results achieved in multiple language understanding benchmarks BIBREF6.
As with all Transformers, it draws power from a mechanism called “Attention” BIBREF16, which allows the model to compute weighted importance for each token in a sequence, effectively pinpointing context reference BIBREF15. Precisely, we compute attention on a set of queries packed as a matrix $Q$, with key and value matrices $K$ and $V$, respectively, as $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_{k}}}\right)V$,
where $d_{k}$ is the dimensionality of the key matrix $K$. Attention allows the Transformer to refer to multiple positions in a sequence for context at any given time regardless of distance, which is an advantage over Recurrent Neural Networks (RNN).
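A direct sketch of this scaled dot-product attention computation (the shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # (..., seq_q, seq_k)
    weights = F.softmax(scores, dim=-1)             # weighted importance per position
    return weights @ V

Q = torch.randn(1, 5, 64)   # (batch, query positions, d_k)
K = torch.randn(1, 7, 64)
V = torch.randn(1, 7, 64)
out = scaled_dot_product_attention(Q, K, V)         # (1, 5, 64)
```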
BERT's advantage over ULMFiT is its bidirectionality, leveraging both left and right context using a pretraining method called “Masked Language Modeling.” In addition, BERT also benefits from being deep, allowing it to capture more context and information. BERT-Base, the smallest BERT model, has 12 layers (768 units in each hidden layer) and 12 attention heads for a total of 110M parameters. Its larger sibling, BERT-Large, has 24 layers (1024 units in each hidden layer) and 16 attention heads for a total of 340M parameters.
Methods ::: GPT-2
The GPT-2 BIBREF8 technique builds up from the original GPT BIBREF7. Its main contribution is the way it is trained. With an improved architecture, it learns to do multiple tasks by just training on vanilla language modeling.
Architecture-wise, it is a Transformer-based model similar to BERT, with a few differences. It uses two feed-forward layers per transformer “block,” in addition to using “delayed residuals” which allows the model to choose which transformed representations to output.
GPT-2 is notable for being extremely deep, with 1.5B parameters, 10x more than the original GPT architecture. This gives it more flexibility in learning tasks unsupervised from language modeling, especially when trained on a very large unlabeled corpus.
Methods ::: Multitask Finetuning
BERT and GPT-2 both lack an explicit “language model finetuning step,” which gives ULMFiT an advantage where it learns to adapt to the stylometry and linguistic features of the text used by its target task. Motivated by this, we propose to augment Transformer-based TL techniques with a language model finetuning step.
Motivated by recent advancements in multitask learning, we finetune the model to the stylometry of the target task at the same time as we finetune the classifier, instead of setting it as a separate step. This produces two losses to be optimized together during training, and ensures that no task (stylometric adaptation or classification) will be prioritized over the other. This concept has been proposed and explored to improve the performance of transfer learning in multiple language tasks BIBREF9, BIBREF10.
We show that this method improves performance on both BERT and GPT-2, given that it learns to adapt to the idiosyncrasies of its target task in a similar way that ULMFiT does.
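A sketch of the joint objective: the language modeling loss and the classification loss are simply summed at each step; the equal weighting is an assumption of this sketch, as no weighting scheme is reported here.

```python
import torch.nn.functional as F

def multitask_loss(lm_logits, lm_targets, clf_logits, clf_targets, lm_weight=1.0):
    """Joint objective for multitask finetuning: the language modeling loss
    (stylometric adaptation) and the classification loss are optimized
    together, so neither task is prioritized over the other."""
    vocab_size = lm_logits.size(-1)
    lm_loss = F.cross_entropy(lm_logits.reshape(-1, vocab_size),
                              lm_targets.reshape(-1))
    clf_loss = F.cross_entropy(clf_logits, clf_targets)
    return clf_loss + lm_weight * lm_loss
```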
Experimental Setup ::: Fake News Dataset
We work with a dataset composed of 3,206 news articles, each labeled real or fake, with a perfect 50/50 split between 1,603 real and fake articles, respectively. Fake articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.
For preprocessing, we only perform tokenization on our dataset, specifically “Byte-Pair Encoding” (BPE) BIBREF17. BPE is a form of fixed-vocabulary subword tokenization that treats subword units, rather than canonical words, as the most primitive entities (i.e. tokens), so that “I am walking today” becomes “I am walk ##ing to ##day”. BPE is useful as it allows our model to represent out-of-vocabulary (OOV) words, unlike standard tokenization. In addition, it helps language models learn morphologically-rich languages, as morphemes are now treated as primary entities instead of canonical word tokens.
For training/finetuning the classifiers, we use a 70%-30% train-test split of the dataset.
Experimental Setup ::: Pretraining Corpora
To pretrain BERT and GPT-2 language models, as well as an AWD-LSTM language model for use in ULMFiT, a large unlabeled training corpus is needed. For this purpose, we construct a corpus of 172,815 articles from Tagalog Wikipedia which we call WikiText-TL-39 BIBREF18. We form training-validation-test splits of 70%-15%-15% from this corpus.
Preprocessing is similar to the fake news dataset, with the corpus only being lightly preprocessed and tokenized using Byte-Pair Encoding.
Corpus statistics for the pretraining corpus are shown in Table TABREF17.
Experimental Setup ::: Siamese Network Training
We train a siamese recurrent neural network as our baseline. For each twin, we use 300 dimensions for the embedding layer and a hidden size of 512 for all hidden state vectors.
To optimize the network, we use a regularized cross-entropy objective of the following form:
where $y(x_1, x_2) = 1$ when $x_1$ and $x_2$ are from the same class and 0 otherwise. We use the Adam optimizer BIBREF19 with an initial learning rate of 1e-4 to train the network for a maximum of 500 epochs.
Experimental Setup ::: Transfer Pretraining
We pretrain a cased BERT-Base model on our prepared unlabeled text corpus using Google's provided pretraining scripts. For the masked language model pretraining objective, we use a 0.15 probability of a word being masked. We also set the maximum number of masked language model predictions to 20, and a maximum sequence length of 512. For training, we use a learning rate of 1e-4 and a batch size of 256. We train the model for 1,000,000 steps with 10,000 steps of learning rate warmup for 157 hours on a Google Cloud Tensor Processing Unit (TPU) v3-8.
For GPT-2, we pretrain a GPT-2 Transformer model on our prepared text corpora using language modeling as its sole pretraining task, according to the specifications of BIBREF8. We use an embedding dimension of 410, a hidden dimension of 2100, and a maximum sequence length of 256. We use 10 attention heads per multihead attention block, with 16 blocks composing the encoder of the transformer. We use dropout on all linear layers to a probability of 0.1. We initialize all parameters to a standard deviation of 0.02. For training, we use a learning rate of 2.5e-4, and a batch size of 32, much smaller than BERT considering the large size of the model. We train the model for 200 epochs with 1,000 steps of learning rate warmup using the Adam optimizer. The model was pretrained for 178 hours on a machine with one NVIDIA Tesla V100 GPU.
For ULMFiT, we pretrain a 3-layer AWD-LSTM model with an embedding size of 400 and a hidden size of 1150. We set the dropout values for the embedding, the RNN input, the hidden-to-hidden transition, and the RNN output to (0.1, 0.3, 0.3, 0.4) respectively. We use a weight dropout of 0.5 on the LSTM’s recurrent weight matrices. The model was trained for 30 epochs with a learning rate of 1e-3, a batch size of 128, and a weight decay of 0.1. We use the Adam optimizer and use slanted triangular learning rate schedules BIBREF5. We train the model on a machine with one NVIDIA Tesla V100 GPU for a total of 11 hours.
For each pretraining scheme, we checkpoint models every epoch to preserve a copy of the weights such that we may restore them once the model starts overfitting. This is done as an extra regularization technique.
Experimental Setup ::: Finetuning
We finetune our models to the target fake news classification task using the pretrained weights with an appended classification layer or head.
For BERT, we append a classification head composed of a single linear layer followed by a softmax transformation to the transformer model. We then finetune our BERT-Base model on the fake news classification task for 3 epochs, using a batch size of 32, and a learning rate of 2e-5.
For GPT-2, our classification head is first comprised of a layer normalization transform, followed by a linear layer, then a softmax transform. We finetune the pretrained GPT-2 transformer for 3 epochs, using a batch size of 32, and a learning rate of 3e-5.
For ULMFiT, we perform language model finetuning on the fake news dataset (appending no extra classification heads yet) for a total of 10 epochs, using a learning rate of 1e-2, a batch size of 80, and weight decay of 0.3. For the final ULMFiT finetuning stage, we append a compound classification head (linear $\rightarrow $ batch normalization $\rightarrow $ ReLU $\rightarrow $ linear $\rightarrow $ batch normalization $\rightarrow $ softmax). We then finetune for 5 epochs, gradually unfreezing layers from the last to the first until all layers are unfrozen on the fourth epoch. We use a learning rate of 1e-2 and set Adam's $\alpha $ and $\beta $ parameters to 0.8 and 0.7, respectively.
To show the efficacy of Multitask Finetuning, we augment BERT and GPT-2 to use this finetuning setup with their classification heads. We finetune both models to the target task for 3 epochs, using a batch size of 32, and a learning rate of 3e-5. For optimization, we use Adam with warmup steps equal to 10% of the total number of training steps over the 3 epochs.
Experimental Setup ::: Generalizability Across Domains
To study the generalizability of the model to different news domains, we test our models against test cases not found in the training dataset. We mainly focus on three domains: political news, opinion articles, and entertainment/gossip articles. Articles used for testing are sourced from the same websites that the training dataset was taken from.
Results and Discussion ::: Classification Results
Our baseline model, the siamese recurrent network, achieved an accuracy of 77.42% on the test set of the fake news classification task.
The transfer learning methods gave comparable scores. BERT finetuned to a final 87.47% accuracy, a 10.05% improvement over the siamese network's performance. GPT-2 finetuned to a final accuracy of 90.99%, a 13.57% improvement from the baseline performance. ULMFiT finetuning gave a final accuracy of 91.59%, an improvement of 14.17% over the baseline Siamese Network.
We can see that the TL techniques outperformed the siamese network baseline, which we hypothesize is due to the intact pretrained knowledge in the language models used to finetune the classifiers. The pretraining step aided the models in forming relationships between text, and thus they performed better at stylometry-based tasks with little finetuning.
The model results are all summarized in Table TABREF26.
Results and Discussion ::: Language Model Finetuning Significance
One of the most surprising results is that BERT and GPT-2 performed worse than ULMFiT in the fake news classification task despite being deeper models capable of more complex relationships between data.
We hypothesize that ULMFiT achieved better accuracy because of its additional language model finetuning step. We provide evidence for this assumption with an additional experiment that shows a decrease in performance when the language model finetuning step is removed, dropping ULMFiT's accuracy to 78.11% and making it perform only marginally better than the baseline model. Results for this experiment are outlined in Table TABREF28.
In this finetuning stage, the model is said to “adapt to the idiosyncrasies of the task it is solving” BIBREF5. Given that our techniques rely on linguistic cues and features to make accurate predictions, having the model adapt to the stylometry or “writing style” of an article will therefore improve performance.
Results and Discussion ::: Multitask-based Finetuning
We used a multitask finetuning technique over the standard finetuning steps for BERT and GPT-2, motivated by the advantage that language model finetuning provides to ULMFiT, and found that it greatly improves the performance of our models.
BERT achieved a final accuracy of 91.20%, now marginally comparable to ULMFiT's full performance. GPT-2, on the other hand, finetuned to a final accuracy of 96.28%, a full 4.69% improvement over the performance of ULMFiT. This provides evidence towards our hypothesis that a language model finetuning step will allow transformer-based TL techniques to perform better, given their inherent advantage in modeling complexity over shallower models such as the AWD-LSTM used by ULMFiT. Results for this experiment are outlined in Table TABREF30.
Ablation Studies
Several ablation studies are performed to establish causation between the model architectures and the performance boosts in the study.
Ablation Studies ::: Pretraining Effects
An ablation on pretraining was done to establish evidence that pretraining before finetuning accounts for a significant boost in performance over the baseline model. Using non-pretrained models, we finetune for the fake news classification task using the same settings as in the prior experiments.
In Table TABREF32, it can be seen that generative pretraining via language modeling does account for a considerable amount of performance, constituting 44.32% of the overall performance (a boost of 42.67% in accuracy) in the multitasking setup, and constituting 43.93% of the overall performance (a boost of 39.97%) in the standard finetuning setup.
This provides evidence that the pretraining step is necessary in achieving state-of-the-art performance.
Ablation Studies ::: Attention Head Effects
An ablation study was done to establish causality between the multiheaded nature of the attention mechanisms and state-of-the-art performance. We posit that since the model can refer to multiple context points at once, it improves in performance.
For this experiment, we performed several pretraining-finetuning setups with varied numbers of attention heads using the multitask-based finetuning scheme. Using a pretrained GPT-2 model, attention heads were masked with zero-tensors to downsample the number of positions the model could attend to at one time.
As shown in Table TABREF34, reducing the number of attention heads severely decreases multitasking performance. Using only one attention head, thereby attending to only one context position at once, degrades the performance to less than the performance of 10 heads using the standard finetuning scheme. This shows that more attention heads, thereby attending to multiple different contexts at once, is important to boosting performance to state-of-the-art results.
While increasing the number of attention heads improves performance, continuing to add extra heads will not result in an equivalent boost, as the performance plateaus after a certain number of heads.
As shown in Figure FIGREF35, the performance boost of the model plateaus after 10 attention heads, which was the default used in the study. While the performance of 16 heads is greater than that of 10, it is only a marginal improvement and does not justify the added cost of training with more attention heads.
Stylometric Tests
To supplement our understanding of the features our models learn and establish empirical differences in their stylometries, we use two stylometric tests traditionally used for authorship attribution: Mendenhall's Characteristic Curves BIBREF20 and John Burrows' Delta Method BIBREF21.
We provide a characteristic curve comparison to establish differences between real and fake news. For the rest of this section, we refer to the characteristic curves on Figure FIGREF36.
When looking at the y-axis, there is a big difference in word count. The fake news corpus has twice the number of words as the real news corpus. This means that fake news articles are on average lengthier than real news articles. The only difference seen on the x-axis is the order of appearance of word lengths 6, 7, and 1. The characteristic curves also exhibit differences in trend. While the head and tail look similar, the bodies show different trends. When graphing the corpora by news category, the heads and tails look similar to the general real and fake news characteristic curves, but the bodies exhibit a trend different from the general corpora. This difference in trend may be attributed to either a lack of text data to properly represent real and fake news or the existence of a stylistic difference between real and fake news.
We also use Burrows’ Delta method to see a numeric distance between text samples. Using the labeled news article corpora, we compare samples outside of the corpora against real and fake news to see how similar they are in terms of vocabulary distance. The test produces a smaller distance for the correct label, which further reaffirms our hypothesis that there is a stylistic difference between the labels. However, the difference in distance between real and fake news against the sample is not significantly large. For articles on politics, business, entertainment, and viral events, the test generates distances that are significant. Meanwhile, news in the safety, sports, technology, infrastructure, educational, and health categories has negligible differences in distance. This suggests that some categories are written similarly despite veracity.
Further Discussions ::: Pretraining Tasks
All the TL techniques were pretrained with a language modeling-based task. While language modeling has been empirically proven as a good pretraining task, we surmise that other pretraining tasks could replace or support it.
Since automatic fake news detection uses stylometric information (i.e. writing style, language cues), we predict that the task could benefit from pretraining objectives that also learn stylometric information such as authorship attribution.
Further Discussions ::: Generalizability Across Domains
When testing on three different types of articles (Political News, Opinion, Entertainment/Gossip), we find that writing style is a prominent indicator for fake articles, supporting previous findings regarding writing style in fake news detection BIBREF22.
Supported by our findings on the stylometric differences of fake and real news, we show that the model predicts a label based on the test article's stylometry. It produces correct labels when tested on real and fake news.
We provide further evidence that the models learn stylometry by testing on out-of-domain articles, particularly opinion and gossip articles. While these articles aren't necessarily real or fake, their stylometries are akin to real and fake articles respectively, and so are classified as such.
Conclusion
In this paper, we show that TL techniques can be used to train robust fake news classifiers in low-resource settings, with TL methods performing better than few-shot techniques, despite low-resource settings being what few-shot techniques are designed for.
We also show the significance of language model finetuning for tasks that involve stylometric cues, with ULMFiT performing better than transformer-based techniques with deeper language model backbones. Motivated by this, we augment the methodology with a multitask learning-inspired finetuning technique that allowed transformer-based transfer learning techniques to adapt to the stylometry of a target task, much like ULMFiT, resulting in better performance.
For future work, we propose that more pretraining tasks be explored, particularly ones that learn stylometric information inherently (such as authorship attribution).
Acknowledgments
The authors would like to acknowledge the efforts of VeraFiles and the National Union of Journalists in the Philippines (NUJP) for their work covering and combating the spread of fake news.
We are partially supported by Google's Tensoflow Research Cloud (TFRC) program. Access to the TPU units provided by the program allowed the BERT models in this paper, as well as the countless experiments that brought it to fruition, possible. | Siamese neural network consisting of an embedding layer, a LSTM layer and a feed-forward layer with ReLU activations |
23d0637f8ae72ae343556ab135eedc7f4cb58032 | 23d0637f8ae72ae343556ab135eedc7f4cb58032_0 | Q: How do they show that acquiring names of places helps self-localization?
Text: Introduction
Autonomous robots, such as service robots, operating in human living environments alongside humans have to be able to perform various tasks and communicate in natural language. To this end, robots are required to acquire novel concepts and vocabulary on the basis of the information obtained from their sensors, e.g., laser sensors, microphones, and cameras, and to recognize a variety of objects, places, and situations in an ambient environment. Above all, we consider it important for the robot to learn the names that humans associate with places in the environment and the spatial areas corresponding to these names; i.e., the robot has to be able to understand words related to places. Therefore, it is important to deal with considerable uncertainty, such as the robot's movement errors, sensor noise, and speech recognition errors.
Several studies on language acquisition by robots have assumed that robots have no prior lexical knowledge. These studies differ from speech recognition studies based on a large vocabulary and natural language processing studies based on lexical, syntactic, and semantic knowledge BIBREF0 , BIBREF1 . Studies on language acquisition by robots also constitute a constructive approach to the human developmental process and the emergence of symbols.
The objectives of this study were to build a robot that learns words related to places and efficiently utilizes this learned vocabulary in self-localization. Lexical acquisition related to places is expected to enable a robot to improve its spatial cognition. A schematic representation depicting the target task of this study is shown in Fig. FIGREF3 . This study assumes that a robot does not have any vocabulary in advance but can recognize syllables or phonemes. The robot then performs self-localization while moving around in the environment, as shown in Fig. FIGREF3 (a). An utterer speaks a sentence including the name of the place to the robot, as shown in Fig. FIGREF3 (b). For the purposes of this study, we need to consider the problems of self-localization and lexical acquisition simultaneously.
When a robot learns novel words from utterances, it is difficult to determine segmentation boundaries and the identity of different phoneme sequences from the speech recognition results, which can lead to errors. First, let us consider the case of the lexical acquisition of an isolated word. For example, if a robot obtains the speech recognition results “aporu”, “epou”, and “aqpuru” (incorrect phoneme recognition of apple), it is difficult for the robot to determine whether they denote the same referent without prior knowledge. Second, let us consider a case of the lexical acquisition of the utterance of a sentence. For example, a robot obtains a speech recognition result such as “thisizanaporu.” The robot necessarily has to segment the sentence into individual words, e.g., “this”, “iz”, “an”, and “aporu”. In addition, it is necessary for the robot to recognize words referring to the same referent, e.g., the fruit apple, from among the many segmented results that contain errors. In the case of Fig. FIGREF3 (c), there is some possibility of learning names that include phoneme errors, e.g., “afroqtabutibe,” because the robot does not have any lexical knowledge. On the other hand, when a robot performs online probabilistic self-localization, we assume that the robot uses sensor data and control data, e.g., values obtained using a range sensor and odometry. If the position of the robot on the global map is unclear, it becomes difficult to identify the self-position using only local sensor information. In the case of global localization using local information, e.g., a range sensor, the problem that hypotheses of the self-position remain in multiple remote locations frequently occurs, as shown in Fig. FIGREF3 (d).
In order to solve the abovementioned problems, in this study, we adopted the following approach. An utterance is recognized not as a single phoneme sequence but as a set of multiple phoneme-sequence candidates. We attempt to suppress the variability in the speech recognition results by performing word discovery that takes these multiple candidates into account. In addition, the names of places are learned by associating words with positions. The lexical acquisition is complemented by particular spatial information; i.e., this information is obtained by hearing utterances including the same word in the same place many times. Furthermore, in this study, we attempt to address the problem of the uncertainty of self-localization by reducing self-position errors using a recognized utterance that includes the name of the current place and the acquired spatial concepts, as shown in Fig. FIGREF3 (e).
In this paper, we propose the nonparametric Bayesian spatial concept acquisition method (SpCoA) on the basis of unsupervised word segmentation and a nonparametric Bayesian generative model that integrates self-localization and the clustering of both words and places. The main contributions of this paper are as follows:
The remainder of this paper is organized as follows: In Section SECREF2 , previous studies on language acquisition and lexical acquisition relevant to our study are described. In Section SECREF3 , the proposed method SpCoA is presented. In Sections SECREF4 and SECREF5 , we discuss the effectiveness of SpCoA in the simulation and in the real environment. Section SECREF6 concludes this paper.
Lexical acquisition
Most studies on lexical acquisition typically focus on lexicons about objects BIBREF0 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Many of these studies have not be able to address the lexical acquisition of words other than those related to objects, e.g., words about places.
Roy et al. proposed a computational model that enables a robot to learn the names of objects from an object image and spontaneous infant-directed speech BIBREF0 . Their results showed that the model performed speech segmentation, word discovery, and visual categorization. Iwahashi et al. reported that a robot properly understands the situation and acquires the relationship of object behaviors and sentences BIBREF2 , BIBREF3 , BIBREF4 . Qu & Chai focused on the conjunction between speech and eye gaze and the use of domain knowledge in lexical acquisition BIBREF6 , BIBREF7 . They proposed an unsupervised learning method that automatically acquires novel words for an interactive system. Qu & Chai's method based on the IBM translation model BIBREF11 estimates the word-entity association probability.
Nakamura et al. proposed a method to learn object concepts and word meanings from multimodal information and verbal information BIBREF9 . The method proposed in BIBREF9 is a categorization method based on multimodal latent Dirichlet allocation (MLDA) that enables the acquisition of object concepts from multimodal information, such as visual, auditory, and haptic information BIBREF12 . Araki et al. addressed the development of a method combining unsupervised word segmentation from uttered sentences by a nested Pitman-Yor language model (NPYLM) BIBREF13 and the learning of object concepts by MLDA BIBREF10 . However, the disadvantage of using NPYLM was that phoneme sequences with errors did not result in appropriate word segmentation.
These studies did not address the lexical acquisition of the space and place that can also tolerate the uncertainty of phoneme recognition. However, for the introduction of robots into the human living environment, robots need to acquire a lexicon related to not only objects but also places. Our study focuses on the lexical acquisition related to places. Robots can adaptively learn the names of places in various human living environments by using SpCoA. We consider that the acquired names of places can be useful for various tasks, e.g., tasks with a movement of robots by the speech instruction.
Simultaneous learning of places and vocabulary
The following studies have addressed lexical acquisition related to places. However, these studies could not utilize the learned language knowledge in other estimations such as the self-localization of a robot.
Taguchi et al. proposed a method for the unsupervised learning of phoneme sequences and relationships between words and objects from various user utterances without any prior linguistic knowledge other than an acoustic model of phonemes BIBREF1 , BIBREF14 . Further, they proposed a method for the simultaneous categorization of self-position coordinates and lexical learning BIBREF15 . These experimental results showed that it was possible to learn the name of a place from utterances in some cases and to output words corresponding to places in a location that was not used for learning.
Milford et al. proposed RatSLAM inspired by the biological knowledge of a pose cell of the hippocampus of rodents BIBREF16 . Milford et al. proposed a method that enables a robot to acquire spatial concepts by using RatSLAM BIBREF17 . Further, Lingodroids, mobile robots that learn a language through robot-to-robot communication, have been studied BIBREF18 , BIBREF19 , BIBREF20 . Here, a robot communicated the name of a place to other robots at various locations. Experimental results showed that two robots acquired the lexicon of places that they had in common. In BIBREF20 , the researchers showed that it was possible to learn temporal concepts in a manner analogous to the acquisition of spatial concepts. These studies reported that the robots created their own vocabulary. However, these studies did not consider the acquisition of a lexicon by human-to-robot speech interactions.
Welke et al. proposed a method that acquires spatial representation by the integration of the representation of the continuous state space on the sensorimotor level and the discrete symbolic entities used in high-level reasoning BIBREF21 . This method estimates the probable spatial domain and word from the given objects by using the spatial lexical knowledge extracted from Google Corpus and the position information of the object. Their study is different from ours because their study did not consider lexicon learning from human speech.
In the case of global localization, the hypothesis of the self-position often remains in multiple remote places. In this case, there is some possibility of performing an incorrect estimation and increasing the estimation error. This problem exists during teaching tasks and during self-localization after lexical acquisition. The abovementioned studies could not deal with this problem. In this paper, we propose a method that enables a robot to perform more accurate self-localization by reducing the estimation error at teaching time with a smoothing method and by utilizing the words acquired through lexical acquisition. The strengths of this study are that the learning of spatial concepts and self-localization are represented as one generative model and that robots are able to utilize the acquired lexicon for self-localization autonomously.
Spatial Concept Acquisition
We propose the nonparametric Bayesian spatial concept acquisition method (SpCoA), which integrates a nonparametric morphological analyzer for lattices BIBREF22 , i.e., latticelm, a spatial clustering method, and Monte Carlo localization (MCL) BIBREF23 .
Generative model
In our study, we define a position as a specific coordinate or a local point in the environment, and the position distribution as the spatial area of the environment. Further, we define a spatial concept as the names of places and the position distributions corresponding to these names.
The model that was developed for spatial concept acquisition is a probabilistic generative model that integrates a self-localization with the simultaneous clustering of places and words. Fig. FIGREF13 shows the graphical model for spatial concept acquisition. Table TABREF14 shows each variable of the graphical model. The number of words in a sentence at time INLINEFORM0 is denoted as INLINEFORM1 . The generative model of the proposed method is defined as equation ( EQREF11 -). DISPLAYFORM0
Then, the probability distribution for equation () can be defined as follows: DISPLAYFORM0
The prior distribution configured by using the stick breaking process (SBP) BIBREF24 is denoted as INLINEFORM0 , the multinomial distribution as INLINEFORM1 , the Dirichlet distribution as INLINEFORM2 , the inverse–Wishart distribution as INLINEFORM3 , and the multivariate Gaussian (normal) distribution as INLINEFORM4 . The motion model and the sensor model of self-localization are denoted as INLINEFORM5 and INLINEFORM6 in equations () and (), respectively.
This model can learn an appropriate number of spatial concepts, depending on the data, by using a nonparametric Bayesian approach. We use the SBP, which is one of the methods based on the Dirichlet process. In particular, this model can consider a theoretically infinite number of spatial concepts INLINEFORM0 and position distributions INLINEFORM1 . SBP computations are difficult because they generate an infinite number of parameters. In this study, we approximate the number of parameters by truncating it at a sufficiently large value, i.e., a weak-limit approximation BIBREF25 .
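A small numerical sketch of drawing mixture weights under this weak-limit (truncated) approximation of the SBP; the concentration parameter and truncation level below are illustrative, not the values used in the experiments:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, seed=0):
    """Weak-limit approximation of the stick breaking process: draw a finite
    but sufficiently large number (`truncation`) of Beta(1, alpha) sticks and
    convert them into mixture weights; setting the last stick to 1 closes the
    stick so the weights sum exactly to one."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)
    betas[-1] = 1.0
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

pi = stick_breaking_weights(alpha=1.0, truncation=50)   # candidate spatial concepts
print(pi.sum())   # 1.0
```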
It is possible to correlate a name with multiple places, e.g., “staircase” is in two different places, and a place with multiple names, e.g., “toilet” and “restroom” refer to the same place. Spatial concepts are represented by a word distribution of the names of the place INLINEFORM0 and several position distributions ( INLINEFORM1 , INLINEFORM2 ) indicated by a multinomial distribution INLINEFORM3 . In other words, this model is capable of relating the mixture of Gaussian distributions to a multinomial distribution of the names of places. It should be noted that the arrows connecting INLINEFORM4 to the surrounding nodes of the proposed graphical model differ from those of an ordinary Gaussian mixture model (GMM). We assume that words obtained by the robot do not change its position, but that the position of the robot affects the distribution of words. Therefore, the proposed generative process assumes that the index of the position distribution INLINEFORM5 , i.e., the category of the place, is generated from the position of the robot INLINEFORM6 . This change can be naturally introduced without any trouble by introducing equation ( EQREF12 ).
Overview of the proposed method SpCoA
We assume that a robot performs self-localization by using control data and sensor data at all times. The procedure for the learning of spatial concepts is as follows:
An utterer teaches a robot the names of places, as shown in Fig. FIGREF3 (b). Every time the robot arrives at a place that was a designated learning target, the utterer says a sentence, including the name of the current place.
The robot performs speech recognition from the uttered speech signal data. Thus, the speech recognition system includes a word dictionary of only Japanese syllables. The speech recognition results are obtained in a lattice format.
Word segmentation is performed by using the lattices of the speech recognition results.
The robot learns spatial concepts from words obtained by word segmentation and robot positions obtained by self-localization for all teaching times. The details of the learning are given in SECREF23 .
The procedure for self-localization utilizing spatial concepts is as follows:
The words of the learned spatial concepts are registered to the word dictionary of the speech recognition system.
When a robot obtains a speech signal, speech recognition is performed. Then, a word sequence as the 1-best speech recognition result is obtained.
The robot modifies the self-localization from words obtained by speech recognition and the position likelihood obtained by spatial concepts. The details of self-localization are provided in SECREF35 .
The proposed method can learn words related to places from the utterances of sentences. We use an unsupervised word segmentation method, latticelm, that can directly segment words from the lattices of the speech recognition results of the uttered sentences BIBREF22 . The lattice can compactly represent a set of promising hypotheses of a speech recognition result, such as an N-best list, in a directed graph format. Unsupervised word segmentation using the lattices of syllable recognition is expected to reduce the variability and errors in phonemes as compared to NPYLM BIBREF13 , i.e., word segmentation using the 1-best speech recognition results.
The self-localization method adopts MCL BIBREF23 , a method that is generally used as the localization of mobile robots for simultaneous localization and mapping (SLAM) BIBREF26 . We assume that a robot generates an environment map by using MCL-based SLAM such as FastSLAM BIBREF27 , BIBREF28 in advance, and then, performs localization by using the generated map. Then, the environment map of both an occupancy grid map and a landmark map is acceptable.
Learning of spatial concept
Spatial concepts are learned from multiple teaching data, control data, and sensor data. The teaching data are a set of uttered sentences for all teaching times. Segmented words of an uttered sentence are converted into a bag-of-words (BoW) representation as a vector of the occurrence counts of words INLINEFORM0 . The set of the teaching times is denoted as INLINEFORM1 , and the number of teaching data items is denoted as INLINEFORM2 . The model parameters are denoted as INLINEFORM3 . The initial values of the model parameters can be set arbitrarily in accordance with a condition. Further, the sampling values of the model parameters from the following joint posterior distribution are obtained by performing Gibbs sampling. DISPLAYFORM0
where the hyperparameters of the model are denoted as INLINEFORM0 . The algorithm of the learning of spatial concepts is shown in Algorithm SECREF23 .
The conditional posterior distribution of each element used for performing Gibbs sampling can be expressed as follows: An index INLINEFORM0 of the position distribution is sampled for each data INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
An index INLINEFORM0 of the spatial concepts is sampled for each data item INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector of the occurrence counts of words in the sentence at time INLINEFORM1 . A posterior distribution representing word probabilities of the name of place INLINEFORM2 is calculated as follows: DISPLAYFORM0
where variables with the subscript INLINEFORM0 denote the set of all teaching times. A word probability of the name of place INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 represents the posterior parameter and INLINEFORM1 denotes the BoW representation of all sentences of INLINEFORM2 in INLINEFORM3 . A posterior distribution representing the position distribution INLINEFORM4 is calculated as follows: DISPLAYFORM0
A position distribution INLINEFORM0 , INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 denotes the Gaussian–inverse–Wishart distribution; INLINEFORM1 , and INLINEFORM2 represent the posterior parameters; and INLINEFORM3 indicates the set of the teaching positions of INLINEFORM4 in INLINEFORM5 . A topic probability distribution INLINEFORM6 of spatial concepts is sampled as follows: DISPLAYFORM0
A posterior distribution representing the mixed weights INLINEFORM0 of the position distributions is calculated as follows: DISPLAYFORM0
A mixed weight INLINEFORM0 of the position distributions is sampled for each INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector counting all the indices of the Gaussian distribution of INLINEFORM1 in INLINEFORM2 .
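To make the sampling scheme concrete, the following is a minimal Python sketch of a single Gibbs sweep over the latent indices, assuming the model structure described above (word distributions over place names, Gaussian position distributions, and per-concept mixture weights). All variable and function names are illustrative assumptions rather than the authors' implementation, and the resampling of the model parameters from their conjugate posteriors is omitted.

```python
# A minimal sketch of one Gibbs sweep over the latent indices (i_t, C_t),
# assuming the model structure described above.  Resampling of W, mu, Sigma,
# pi, and phi from their conjugate posteriors is omitted for brevity.
import numpy as np
from scipy.stats import multivariate_normal

def log_cat(p):
    # Log of categorical probabilities, guarded against log(0).
    return np.log(p + 1e-300)

def gibbs_sweep(x, bow, i_idx, C_idx, W, mu, Sigma, pi, phi, rng):
    """x: (T, 2) teaching positions, bow: (T, V) word counts per utterance,
    W: (L, V) word probabilities, mu/Sigma: (K, 2)/(K, 2, 2) position dists,
    pi: (L,) concept weights, phi: (L, K) per-concept mixture weights."""
    T = x.shape[0]
    L, K = phi.shape
    for t in range(T):
        # Sample the position-distribution index i_t given x_t and C_t.
        logp_i = np.array([multivariate_normal.logpdf(x[t], mu[k], Sigma[k])
                           + log_cat(phi[C_idx[t], k]) for k in range(K)])
        p_i = np.exp(logp_i - logp_i.max()); p_i /= p_i.sum()
        i_idx[t] = rng.choice(K, p=p_i)
        # Sample the spatial-concept index C_t given the words in S_t and i_t.
        logp_c = bow[t] @ log_cat(W).T + log_cat(pi) + log_cat(phi[:, i_idx[t]])
        p_c = np.exp(logp_c - logp_c.max()); p_c /= p_c.sum()
        C_idx[t] = rng.choice(L, p=p_c)
    return i_idx, C_idx
```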
Self-positions INLINEFORM0 are sampled by using a Monte Carlo fixed-lag smoother BIBREF29 in the learning phase. The smoother estimates the self-position INLINEFORM1 not as INLINEFORM2 , i.e., a sequential estimate from the data INLINEFORM3 given up to time INLINEFORM4 , but as INLINEFORM5 , i.e., an estimate from the data INLINEFORM6 given up to a time INLINEFORM7 later than INLINEFORM8 INLINEFORM9 . In general, smoothing provides a more accurate estimate than the online estimation of MCL. In contrast, if the self-position of the robot INLINEFORM10 is sampled by direct assignment sampling for each time INLINEFORM11 , the sampling of INLINEFORM12 is divided into the case of a teaching time INLINEFORM13 and that of any other time INLINEFORM14 , as follows: DISPLAYFORM0
[tb] Learning of spatial concepts [1] INLINEFORM0 , INLINEFORM1 Localization and speech recognition INLINEFORM2 to INLINEFORM3 INLINEFORM4 BIBREF29 the speech signal is observed INLINEFORM5 add INLINEFORM6 to INLINEFORM7 Registering the lattice add INLINEFORM8 to INLINEFORM9 Registering the teaching time Word segmentation using lattices INLINEFORM10 BIBREF22 Gibbs sampling Initialize parameters INLINEFORM11 , INLINEFORM12 , INLINEFORM13 INLINEFORM14 to INLINEFORM15 INLINEFORM16 ( EQREF25 ) INLINEFORM17 ( EQREF26 ) INLINEFORM18 ( EQREF28 ) INLINEFORM19 ( EQREF30 ) INLINEFORM20 ( EQREF31 ) INLINEFORM21 ( EQREF33 ) INLINEFORM22 to INLINEFORM23 INLINEFORM24 ( EQREF34 ) INLINEFORM25
Self-localization after learning spatial concepts
A robot that has acquired spatial concepts can leverage them for self-localization. The estimated model parameters INLINEFORM0 and a speech recognition sentence INLINEFORM1 at time INLINEFORM2 are added to the conditioning part of the probability formula of MCL as follows: DISPLAYFORM0
When the robot hears the name of a place spoken by the utterer, in addition to the likelihood of the sensor model of MCL, the likelihood of INLINEFORM0 with respect to a speech recognition sentence is calculated as follows: DISPLAYFORM0
The algorithm of self-localization utilizing spatial concepts is shown in Algorithm SECREF35 . The set of particles is denoted as INLINEFORM0 , the temporary set that stores the pairs of the particle INLINEFORM1 and the weight INLINEFORM2 , i.e., INLINEFORM3 , is denoted as INLINEFORM4 . The number of particles is INLINEFORM5 . The function INLINEFORM6 is a function that moves each particle from its previous state INLINEFORM7 to its current state INLINEFORM8 by using control data. The function INLINEFORM9 calculates the likelihood of each particle INLINEFORM10 using sensor data INLINEFORM11 . These functions are normally used in MCL. For further details, please refer to BIBREF26 . In this case, a speech recognition sentence INLINEFORM12 is obtained by the speech recognition system using a word dictionary containing all the learned words. [tb] Self-localization utilizing spatial concepts [1] INLINEFORM13 INLINEFORM14 INLINEFORM15 INLINEFORM16 to INLINEFORM17 INLINEFORM18 () INLINEFORM19 () the speech signal is observed INLINEFORM20 add INLINEFORM21 to INLINEFORM22 INLINEFORM23 to INLINEFORM24 draw INLINEFORM25 with probability INLINEFORM26 add INLINEFORM27 to INLINEFORM28 INLINEFORM29
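To illustrate how the utterance enters the particle filter, the following hedged Python sketch multiplies the usual MCL sensor-model weight of each particle by the likelihood of the recognized place name under the learned spatial concepts, marginalizing over concepts and position distributions. The parameter names follow the earlier sketch and are assumptions for illustration, not the authors' code.

```python
# A hedged sketch of the weighting step of SpCoA MCL: each particle's sensor
# likelihood is multiplied by the likelihood of the recognized place name
# under the learned spatial concepts.
import numpy as np
from scipy.stats import multivariate_normal

def speech_likelihood(xy, words, W, mu, Sigma, pi, phi, vocab):
    """Likelihood of the recognized words given a particle position xy,
    marginalizing over spatial concepts c and position distributions k."""
    L, K = phi.shape
    lik = 0.0
    for c in range(L):
        p_words = np.prod([W[c, vocab[w]] for w in words if w in vocab])
        p_pos = sum(phi[c, k] * multivariate_normal.pdf(xy, mu[k], Sigma[k])
                    for k in range(K))
        lik += pi[c] * p_words * p_pos
    return lik

def update_weights(particles, weights, z, words, sensor_model, *spcoa_params):
    """One MCL weighting step: sensor model times (optional) utterance term."""
    for m, pose in enumerate(particles):           # pose = (x, y, theta)
        w = sensor_model(z, pose)                  # standard MCL sensor model
        if words:                                  # an utterance was observed
            w *= speech_likelihood(pose[:2], words, *spcoa_params)
        weights[m] = w
    return weights / weights.sum()
```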
Experiment I
In this experiment, we validate the proposed method (SpCoA) in an environment simulated on the simulator platform SIGVerse BIBREF30 , which enables the simulation of social interactions. Speech recognition is performed using the Japanese continuous speech recognition system Julius BIBREF31 , BIBREF32 . Julius adopts the set of 43 Japanese phonemes defined by the speech database committee of the Acoustical Society of Japan (ASJ) BIBREF31 , and the same phoneme representation is adopted in this study. The Julius system uses a word dictionary containing 115 Japanese syllables. The microphone attached to the robot is a SHURE PG27-USB. Further, the unsupervised morphological analyzer latticelm 0.4 is used BIBREF22 .
In the experiment, we compare three types of word segmentation methods, referred to below as latticelm, 1-best NPYLM, and BoS. Each method provides a set of syllable sequences to the graphical model of SpCoA; this set is used for the learning of spatial concepts as the recognized uttered sentences INLINEFORM0 .
The remainder of this section is organized as follows: In Section SECREF43 , the conditions and results of learning spatial concepts are described. The experiments performed using the learned spatial concepts are described in Section SECREF49 to SECREF64 . In Section SECREF49 , we evaluate the accuracy of the phoneme recognition and word segmentation for uttered sentences. In Section SECREF56 , we evaluate the clustering accuracy of the estimation results of index INLINEFORM0 of spatial concepts for each teaching utterance. In Section SECREF60 , we evaluate the accuracy of the acquisition of names of places. In Section SECREF64 , we show that spatial concepts can be utilized for effective self-localization.
Learning of spatial concepts
We conduct this experiment of spatial concept acquisition in the environment prepared on SIGVerse. The experimental environment is shown in Fig. FIGREF45 . A mobile robot can move by performing forward, backward, right rotation, or left rotation movements on a two-dimensional plane. In this experiment, the robot can use an approximately correct map of the considered environment. The robot has a range sensor in front and performs self-localization on the basis of an occupancy grid map. The initial particles are defined by the true initial position of the robot. The number of particles is INLINEFORM0 .
The lag value of the Monte Carlo fixed-lag smoothing is fixed at 100. The other parameters of this experiment are as follows: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 . The number of iterations used for Gibbs sampling is 100. This experiment does not include the direct assignment sampling of INLINEFORM9 in equation ( EQREF34 ), i.e., lines 22–24 of Algorithm SECREF23 are omitted, because we consider that the self-position can be obtained with sufficiently good accuracy by using the Monte Carlo smoothing. Eight places are selected as the learning targets, and eight types of place names are considered. Each uttered place name is shown in Fig. FIGREF45 . These utterances include the same name in different places, i.e., “teeburunoatari” (which means near the table in English), and different names in the same place, i.e., “kiqchiN” and “daidokoro” (which mean a kitchen in English). The other teaching names are “geNkaN” (which means an entrance or a doorway in English); “terebimae” (which means the front of the TV in English); “gomibako” (which means a trash box in English); “hoNdana” (which means a bookshelf in English); and “sofaamae” (which means the front of the sofa in English). The teaching utterances, including the 10 types of phrases, are spoken for a total of 90 times. The phrases in each uttered sentence are listed in Table TABREF46 .
The learning results of spatial concepts obtained by using the proposed method are presented here. Fig. FIGREF47 shows the position distributions learned in the experimental environment. Fig. FIGREF47 (top) shows the word distributions of the names of places for each spatial concept, and Fig. FIGREF47 (bottom) shows the multinomial distributions of the indices of the position distributions. These results show that the proposed method can learn the names of places corresponding to each place of the learning target. In the spatial concept of index INLINEFORM0 , the word with the highest probability was “sofamae”, and the index of the position distribution with the highest probability was INLINEFORM1 ; therefore, the place name “sofamae” was learned to correspond to the position distribution INLINEFORM2 . In the spatial concept of index INLINEFORM3 , “kiqchi” and “daidokoro” were learned to correspond to the position distribution INLINEFORM4 . This result shows that multiple names can be learned for the same place. In the spatial concept of index INLINEFORM5 , “te” and “durunoatari” (a single word in a normal situation) were learned to correspond to the position distributions INLINEFORM6 and INLINEFORM7 . This result shows that the same name can be learned for multiple places.
Phoneme recognition accuracy of uttered sentences
We compared the performance of the three word segmentation methods on all the considered uttered sentences. Because it is difficult to evaluate the ambiguity of syllable recognition and the unsupervised word segmentation separately, this experiment treats each word delimiter as a single letter. We calculated the matching rate between the phoneme string of the recognition result of each uttered sentence and the correct phoneme string of the teaching data, which was segmented into Japanese morphemes using MeCab, an off-the-shelf Japanese morphological analyzer that is widely used for natural language processing. The matching rate of the phoneme strings was calculated by using the phoneme accuracy rate (PAR) as follows: DISPLAYFORM0
The numerator of equation ( EQREF52 ) is calculated by using the Levenshtein distance between the correct phoneme string and the recognized phoneme string. INLINEFORM0 denotes the number of substitutions; INLINEFORM1 , the number of deletions; and INLINEFORM2 , the number of insertions. INLINEFORM3 represents the number of phonemes of the correct phoneme string.
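For concreteness, the snippet below computes PAR directly from this definition: since the numbers of substitutions, deletions, and insertions of the optimal alignment sum to the Levenshtein distance, the numerator reduces to the length of the correct string minus that distance. It is a plain illustration, not the authors' evaluation script.

```python
def phoneme_accuracy_rate(ref, hyp):
    """PAR of equation (EQREF52): (N - S - D - I) / N, where S + D + I is the
    Levenshtein distance between the reference and the recognized phoneme
    strings (given as lists of phonemes, delimiters counted as single letters)."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]      # d[i][j]: edit cost of ref[:i] vs hyp[:j]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return (n - d[n][m]) / n

# Example: one substituted phoneme out of eight gives PAR = 0.875.
print(phoneme_accuracy_rate(list("gomibako"), list("gomibago")))
```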
Table TABREF54 shows the PAR results, and Table TABREF55 presents examples of the word segmentation results of the three considered methods. We found that the unsupervised morphological analyzer capable of using lattices improved the accuracy of phoneme recognition and word segmentation. This result suggests that this word segmentation method considers the multiple hypotheses of speech recognition as a whole and reduces uncertainty, such as variability in recognition, by using the syllable recognition results in the lattice format.
Estimation accuracy of spatial concepts
We compared the estimated indices INLINEFORM0 of the spatial concepts for each teaching utterance with the correct classification given by humans. The evaluation in this experiment uses the adjusted Rand index (ARI) BIBREF33 , a measure of the degree of similarity between two clustering results.
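For reference, the ARI can be computed directly with scikit-learn; the labels below are made-up assignments used only to illustrate the call.

```python
from sklearn.metrics import adjusted_rand_score

# Made-up assignments of nine teaching utterances, purely for illustration:
# a human grouping versus estimated spatial-concept indices C_t.  Only the
# grouping matters, not the absolute index values.
human_labels     = [0, 0, 0, 1, 1, 2, 2, 3, 3]
estimated_labels = [5, 5, 5, 2, 2, 7, 7, 1, 0]
print(adjusted_rand_score(human_labels, estimated_labels))
```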
Further, to investigate the effect of using location information in lexical acquisition, we compared the proposed method with a word clustering method that does not use location information. In particular, this baseline used a Dirichlet process mixture (DPM) of unigram models with an SBP representation. Its parameters corresponding to those of the proposed method were set to the same values and were estimated using Gibbs sampling.
Fig. FIGREF59 shows the average ARI values over 10 trials of learning by Gibbs sampling. Here, we found that the proposed method showed the best score. These results, together with the results reported in Section SECREF49 , suggest that learning from uttered sentences obtained with better phoneme recognition and better word segmentation leads to better acquisition of spatial concepts. Furthermore, in a comparison of the two clustering methods, we found that SpCoA was considerably better than DPM, the word clustering method without location information, irrespective of the word segmentation method used. These experimental results show that the estimation accuracy of spatial concepts and vocabulary can be improved by performing word clustering that takes location information into account.
Accuracy of acquired phoneme sequences representing the names of places
We evaluated whether the names of places were properly learned for the considered teaching places. This experiment assumes that the robot is asked for the best phoneme sequence INLINEFORM0 representing its self-position INLINEFORM1 . The robot moves close to each teaching place. The probability of a word INLINEFORM2 given the self-position INLINEFORM3 of the robot, INLINEFORM4 , can be obtained by using equation ( EQREF37 ), and the word with the highest probability is selected. We computed the PAR between the correct phoneme sequence and the selected name of the place. Because “kiqchiN” and “daidokoro” were taught for the same place, the word with the higher PAR score was adopted.
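A hedged sketch of this retrieval step is given below: every learned word is scored at the robot's current position by marginalizing over spatial concepts and position distributions, and the argmax is returned. The parameter names mirror the earlier sketches and are illustrative assumptions.

```python
# Score every learned word at position xy and return the most probable name.
import numpy as np
from scipy.stats import multivariate_normal

def best_place_name(xy, W, mu, Sigma, pi, phi, vocab):
    """Return the word with the highest probability of naming position xy."""
    L, K = phi.shape
    p_pos = np.array([sum(phi[c, k] * multivariate_normal.pdf(xy, mu[k], Sigma[k])
                          for k in range(K)) for c in range(L)])
    scores = W.T @ (pi * p_pos)                    # proportional to p(w | xy)
    inv_vocab = {idx: w for w, idx in vocab.items()}
    return inv_vocab[int(np.argmax(scores))]
```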
Fig. FIGREF63 shows the PAR results for the word selected as the name of a place. SpCoA (latticelm), the proposed method using unsupervised word segmentation on the lattice-format speech recognition results, showed the best PAR score. In the 1-best and BoS methods, parts of the syllable sequences of place names were segmented too finely, as shown in Table TABREF55 ; therefore, the robot could not learn the names of the teaching places as coherent phoneme sequences. In contrast, the robot could learn the names of the teaching places more accurately by using the proposed method.
Self-localization that utilizes acquired spatial concepts
In this experiment, we validate that the robot can make efficient use of the acquired spatial concepts. We compare the localization accuracy of the proposed method (SpCoA MCL) and the conventional MCL. When the robot arrives at a learning target, the utterer once again speaks a sentence containing the name of the place to the robot. The moving trajectory of the robot and the positions of the utterances are the same in all trials. In particular, the uttered sentence is “kokowa ** dayo”; this phrase is not used during the learning task. The number of particles is INLINEFORM0 , and the initial particles are uniformly distributed in the considered environment. The robot performs a control operation at each time step.
The estimation error in the localization is evaluated as follows: While running localization, we record the estimation error (equation ( EQREF66 )) on the INLINEFORM0 plane of the floor for each time step. DISPLAYFORM0
where INLINEFORM0 denote the true position coordinates of the robot as obtained from the simulator, and INLINEFORM1 , INLINEFORM2 represent the weighted mean values of the localization coordinates. The normalized weight INLINEFORM3 is obtained as a likelihood from the sensor model in MCL. At utterance times, this likelihood is multiplied by the value calculated using equation ( EQREF37 ). INLINEFORM4 , INLINEFORM5 denote the INLINEFORM6 -coordinate and the INLINEFORM7 -coordinate of particle INLINEFORM8 at time INLINEFORM9 . After running the localization, we calculated the average of INLINEFORM10 .
Further, we compared the estimation accuracy rate (EAR) of global localization. In each trial, we calculated the proportion of time steps in which the estimation error was less than 50 cm.
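Both evaluation quantities reduce to a few lines of code; the sketch below computes the weighted-mean position error of equation ( EQREF66 ) for one time step and the EAR over a trial with a 50 cm threshold. Array names are chosen for illustration.

```python
import numpy as np

def localization_error(true_xy, particles_xy, weights):
    """Weighted-mean position error for one time step (equation (EQREF66))."""
    est = np.average(particles_xy, axis=0, weights=weights)
    return float(np.linalg.norm(np.asarray(true_xy) - est))

def estimation_accuracy_rate(errors, threshold=0.5):
    """Proportion of time steps whose error is below the threshold (0.5 m = 50 cm)."""
    errors = np.asarray(errors)
    return float((errors < threshold).mean())

# Usage: errors = [localization_error(gt[t], P[t], w[t]) for t in range(T)]
#        mean_error, ear = np.mean(errors), estimation_accuracy_rate(errors)
```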
Fig. FIGREF68 shows the estimation errors and the EAR for 10 trials of each method. All trials of SpCoA MCL (latticelm) and almost all trials of the methods using 1-best NPYLM and BoS showed relatively small estimation errors. The second trial of 1-best NPYLM and the fifth trial of BoS showed higher estimation errors; in these trials, the utterance information caused many particles to converge to places other than the one where the robot actually was. Nevertheless, compared with the conventional MCL, the results obtained using spatial concepts showed an obvious improvement in estimation accuracy. Consequently, the spatial concepts acquired by the proposed method proved to be very helpful in improving the localization accuracy.
Experiment II
In this experiment, the effectiveness of the proposed method was tested by using an autonomous mobile robot TurtleBot 2 in a real environment. Fig. FIGREF70 shows TurtleBot 2 used in the experiments.
Mapping and self-localization are performed using the Robot Operating System (ROS). The speech recognition system, the microphone, and the unsupervised morphological analyzer were the same as those described in Section SECREF4 .
Learning of spatial concepts in the real environment
We conducted an experiment of the spatial concept acquisition in a real environment of an entire floor of a building. In this experiment, self-localization was performed using a map generated by SLAM. The initial particles are defined by the true initial position of the robot. The generated map in the real environment and the names of teaching places are shown in Fig. FIGREF73 . The number of teaching places was 19, and the number of teaching names was 16. The teaching utterances were performed for a total of 100 times.
Fig. FIGREF75 shows the position distributions learned on the map. Table TABREF76 shows the five best elements of the multinomial distributions of the name of place INLINEFORM0 and the multinomial distributions of the indices of the position distribution INLINEFORM1 for each index of spatial concept INLINEFORM2 .
Thus, we found that the proposed method can learn the names of places corresponding to the considered teaching places in the real environment. For example, in the spatial concept of index INLINEFORM0 , “torire” was learned to correspond to the position distribution of INLINEFORM1 . Similarly, “kidanokeN” corresponded to INLINEFORM2 in INLINEFORM3 , and “kaigihitsu” corresponded to INLINEFORM4 in INLINEFORM5 . In the spatial concept of index INLINEFORM6 , part of the syllable sequences was segmented too finely, as “sohatsuke”, “N”, and “tani”, “guchi”. In this case, the robot was taught two types of names, and these words were learned to correspond to the same position distribution of INLINEFORM7 . In INLINEFORM8 , “gomibako” showed a high probability, and it corresponded to the three position distributions of INLINEFORM9 . The position distribution of INLINEFORM10 had the fourth highest probability in the spatial concept INLINEFORM11 . Therefore, “raqkukeN,” which had the fifth highest probability in the spatial concept INLINEFORM12 (and was expected to relate to the spatial concept INLINEFORM13 ), could be estimated as a word drawn from spatial concept INLINEFORM14 . However, in practice, this situation did not cause any severe problems, because the spatial concept of index INLINEFORM15 had higher probabilities for the word “rapukeN” and the position distribution INLINEFORM16 than INLINEFORM17 did. In a probabilistic model, the relative probabilities and the integrated information are what matter. When the robot listened to an utterance related to “raqkukeN,” it could use the spatial concept of index INLINEFORM18 for self-localization with a high probability and appropriately update its estimated self-location. We expected that the spatial concept of index INLINEFORM19 would be learned as two separate spatial concepts; however, “watarirooka” and “kaidaNmae” were learned as the same spatial concept. Therefore, the multinomial distribution INLINEFORM20 showed higher probabilities for the indices of the position distributions corresponding to the teaching places of both “watarirooka” and “kaidaNmae”.
The proposed method adopts a nonparametric Bayesian approach that makes it possible to form spatial concepts allowing many-to-many correspondences between names and places. As a side effect, however, this flexibility can introduce ambiguity in which originally distinct spatial concepts are merged into a single concept. The ambiguity of concepts such as INLINEFORM0 may have a negative effect on self-localization, even though the self-localization performance was (overall) clearly improved by employing the proposed method. A solution to this problem will be considered in future work.
In terms of the PAR of uttered sentences, the value obtained with the evaluation method of Section SECREF49 is 0.83, which is comparable to the result in Section SECREF49 . However, in terms of the PAR of place names, the value obtained with the evaluation method of Section SECREF60 is 0.35, which is lower than that in Section SECREF60 . We consider that the increased uncertainty of the real environment and the larger number of teaching words reduced the performance. We expect that this could be improved with further experience related to places, e.g., if the number of utterances per place is increased and additional sensory information is provided.
Modification of localization by the acquired spatial concepts
In this experiment, we verified how spatial concepts modify self-localization during global localization. This experiment used the learning results of the spatial concepts presented in Section SECREF71 . The experimental procedure is as follows. The initial particles were uniformly distributed over the entire floor. The robot starts moving from a position some distance away from the target place. When the robot reached the target place, the utterer spoke a sentence containing the name of the place to the robot. Upon obtaining the speech information, the robot modifies its self-localization on the basis of the acquired spatial concepts. The number of particles was the same as that mentioned in Section SECREF71 .
Fig. FIGREF80 shows the results of self-localization before (top part of the figure) and after (bottom part of the figure) the utterance for three places. The particle states are denoted by red arrows, and the moving trajectory of the robot is indicated by a green dotted arrow. Figs. FIGREF80 (a), (b), and (c) show the results for the place names “toire”, “souhatsukeN”, and “gomibako”. Further, three spatial concepts, i.e., those at INLINEFORM0 , were learned as “gomibako”. In this experiment, the utterer spoke to the robot when the robot came close to the place of INLINEFORM1 . In all the examples shown in the top part of the figure, the particles were dispersed over several places. In contrast, in all the examples shown in the bottom part of the figure, the number of particles near the true position of the robot clearly increased. Thus, we can conclude that the proposed method can modify self-localization by using spatial concepts.
Conclusion and Future Work
In this paper, we discussed the spatial concept acquisition, lexical acquisition related to places, and self-localization using acquired spatial concepts. We proposed nonparametric Bayesian spatial concept acquisition method SpCoA that integrates latticelm BIBREF22 , a spatial clustering method, and MCL. We conducted experiments for evaluating the performance of SpCoA in a simulation and a real environment. SpCoA showed good results in all the experiments. In experiments of the learning of spatial concepts, the robot could form spatial concepts for the places of the learning targets from human continuous speech signals in both the room of the simulation environment and the entire floor of the real environment. Further, the unsupervised word segmentation method latticelm could reduce the variability and errors in the recognition of phonemes in all the utterances. SpCoA achieved more accurate lexical acquisition by performing word segmentation using the lattices of the speech recognition results. In the self-localization experiments, the robot could effectively utilize the acquired spatial concepts for recognizing self-position and reducing the estimation errors in self-localization. As a method that further improves the performance of the lexical acquisition, a mutual learning method was proposed by Nakamura et al. on the basis of the integration of the learning of object concepts with a language model BIBREF34 , BIBREF35 . Following a similar approach, Heymann et al. proposed a method that alternately and repeatedly updates phoneme recognition results and the language model by using unsupervised word segmentation BIBREF36 . As a result, they achieved robust lexical acquisition. In our study, we can expect to improve the accuracy of lexical acquisition for spatial concepts by estimating both the spatial concepts and the language model.
Furthermore, as future work, we consider it necessary for robots to learn spatial concepts online and to recognize whether an uttered word indicates the current place or a destination. Developing a method that simultaneously acquires spatial concepts and builds a map is another of our future objectives; we believe that spatial concepts will have a positive effect on mapping. We also intend to examine a method that associates images and landscapes with spatial concepts, and a method that estimates both spatial concepts and object concepts.
[] Akira Taniguchi received his BE degree from Ritsumeikan University in 2013 and his ME degree from the Graduate School of Information Science and Engineering, Ritsumeikan University, in 2015. He is currently working toward his PhD degree at the Emergent System Lab, Ritsumeikan University, Japan. His research interests include language acquisition, concept acquisition, and symbol emergence in robotics.
[] Tadahiro Taniguchi received the ME and PhD degrees from Kyoto University in 2003 and 2006, respectively. From April 2005 to March 2006, he was a Japan Society for the Promotion of Science (JSPS) research fellow (DC2) in the Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University. From April 2006 to March 2007, he was a JSPS research fellow (PD) in the same department. From April 2007 to March 2008, he was a JSPS research fellow in the Department of Systems Science, Graduate School of Informatics, Kyoto University. From April 2008 to March 2010, he was an assistant professor at the Department of Human and Computer Intelligence, Ritsumeikan University. Since April 2010, he has been an associate professor in the same department. He is currently engaged in research on machine learning, emergent systems, and semiotics.
[] Tetsunari Inamura received the BE, MS and PhD degrees from the University of Tokyo, in 1995, 1997 and 2000, respectively. He was a Researcher of the CREST program, Japanese Science and Technology Cooperation, from 2000 to 2003, and then joined the Department of Mechano-Informatics, School of Information Science and Technology, University of Tokyo as a Lecturer, from 2003 to 2006. He is now an Associate Professor in the Principles of Informatics Research Division, National Institute of Informatics, and an Associate Professor in the Department of Informatics, School of Multidisciplinary Sciences, Graduate University for Advanced Studies (SOKENDAI). His research interests include imitation learning and symbol emergence on humanoid robots, development of interactive robots through virtual reality and so on.
The words of the learned spatial concepts are registered to the word dictionary of the speech recognition system.
When a robot obtains a speech signal, speech recognition is performed. Then, a word sequence as the 1-best speech recognition result is obtained.
The robot modifies the self-localization from words obtained by speech recognition and the position likelihood obtained by spatial concepts. The details of self-localization are provided in SECREF35 .
The proposed method can learn words related to places from the utterances of sentences. We use an unsupervised word segmentation method latticelm that can directly segment words from the lattices of the speech recognition results of the uttered sentences BIBREF22 . The lattice can represent to a compact the set of more promising hypotheses of a speech recognition result, such as N-best, in a directed graph format. Unsupervised word segmentation using the lattices of syllable recognition is expected to be able to reduce the variability and errors in phonemes as compared to NPYLM BIBREF13 , i.e., word segmentation using the 1-best speech recognition results.
The self-localization method adopts MCL BIBREF23 , a method that is generally used as the localization of mobile robots for simultaneous localization and mapping (SLAM) BIBREF26 . We assume that a robot generates an environment map by using MCL-based SLAM such as FastSLAM BIBREF27 , BIBREF28 in advance, and then, performs localization by using the generated map. Then, the environment map of both an occupancy grid map and a landmark map is acceptable.
Learning of spatial concept
Spatial concepts are learned from multiple teaching data, control data, and sensor data. The teaching data are a set of uttered sentences for all teaching times. Segmented words of an uttered sentence are converted into a bag-of-words (BoW) representation as a vector of the occurrence counts of words INLINEFORM0 . The set of the teaching times is denoted as INLINEFORM1 , and the number of teaching data items is denoted as INLINEFORM2 . The model parameters are denoted as INLINEFORM3 . The initial values of the model parameters can be set arbitrarily in accordance with a condition. Further, the sampling values of the model parameters from the following joint posterior distribution are obtained by performing Gibbs sampling. DISPLAYFORM0
where the hyperparameters of the model are denoted as INLINEFORM0 . The algorithm of the learning of spatial concepts is shown in Algorithm SECREF23 .
The conditional posterior distribution of each element used for performing Gibbs sampling can be expressed as follows: An index INLINEFORM0 of the position distribution is sampled for each data INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
An index INLINEFORM0 of the spatial concepts is sampled for each data item INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector of the occurrence counts of words in the sentence at time INLINEFORM1 . A posterior distribution representing word probabilities of the name of place INLINEFORM2 is calculated as follows: DISPLAYFORM0
where variables with the subscript INLINEFORM0 denote the set of all teaching times. A word probability of the name of place INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 represents the posterior parameter and INLINEFORM1 denotes the BoW representation of all sentences of INLINEFORM2 in INLINEFORM3 . A posterior distribution representing the position distribution INLINEFORM4 is calculated as follows: DISPLAYFORM0
A position distribution INLINEFORM0 , INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 denotes the Gaussian–inverse–Wishart distribution; INLINEFORM1 , and INLINEFORM2 represent the posterior parameters; and INLINEFORM3 indicates the set of the teaching positions of INLINEFORM4 in INLINEFORM5 . A topic probability distribution INLINEFORM6 of spatial concepts is sampled as follows: DISPLAYFORM0
A posterior distribution representing the mixed weights INLINEFORM0 of the position distributions is calculated as follows: DISPLAYFORM0
A mixed weight INLINEFORM0 of the position distributions is sampled for each INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector counting all the indices of the Gaussian distribution of INLINEFORM1 in INLINEFORM2 .
Self-positions INLINEFORM0 are sampled by using a Monte Carlo fixed-lag smoother BIBREF29 in the learning phase. The smoother can estimate self-position INLINEFORM1 and not INLINEFORM2 , i.e., a sequential estimation from the given data INLINEFORM3 until time INLINEFORM4 , but it can estimate INLINEFORM5 , i.e., an estimation from the given data INLINEFORM6 until time INLINEFORM7 later than INLINEFORM8 INLINEFORM9 . In general, the smoothing method can provide a more accurate estimation than the MCL of online estimation. In contrast, if the self-position of a robot INLINEFORM10 is sampled like direct assignment sampling for each time INLINEFORM11 , the sampling of INLINEFORM12 is divided in the case with the teaching time INLINEFORM13 and another time INLINEFORM14 as follows: DISPLAYFORM0
[tb] Learning of spatial concepts [1] INLINEFORM0 , INLINEFORM1 Localization and speech recognition INLINEFORM2 to INLINEFORM3 INLINEFORM4 BIBREF29 the speech signal is observed INLINEFORM5 add INLINEFORM6 to INLINEFORM7 Registering the lattice add INLINEFORM8 to INLINEFORM9 Registering the teaching time Word segmentation using lattices INLINEFORM10 BIBREF22 Gibbs sampling Initialize parameters INLINEFORM11 , INLINEFORM12 , INLINEFORM13 INLINEFORM14 to INLINEFORM15 INLINEFORM16 ( EQREF25 ) INLINEFORM17 ( EQREF26 ) INLINEFORM18 ( EQREF28 ) INLINEFORM19 ( EQREF30 ) INLINEFORM20 ( EQREF31 ) INLINEFORM21 ( EQREF33 ) INLINEFORM22 to INLINEFORM23 INLINEFORM24 ( EQREF34 ) INLINEFORM25
Self-localization of after learning spatial concepts
A robot that acquires spatial concepts can leverage spatial concepts to self-localization. The estimated model parameters INLINEFORM0 and a speech recognition sentence INLINEFORM1 at time INLINEFORM2 are given to the condition part of the probability formula of MCL as follows: DISPLAYFORM0
When the robot hears the name of a place spoken by the utterer, in addition to the likelihood of the sensor model of MCL, the likelihood of INLINEFORM0 with respect to a speech recognition sentence is calculated as follows: DISPLAYFORM0
The algorithm of self-localization utilizing spatial concepts is shown in Algorithm SECREF35 . The set of particles is denoted as INLINEFORM0 , the temporary set that stores the pairs of the particle INLINEFORM1 and the weight INLINEFORM2 , i.e., INLINEFORM3 , is denoted as INLINEFORM4 . The number of particles is INLINEFORM5 . The function INLINEFORM6 is a function that moves each particle from its previous state INLINEFORM7 to its current state INLINEFORM8 by using control data. The function INLINEFORM9 calculates the likelihood of each particle INLINEFORM10 using sensor data INLINEFORM11 . These functions are normally used in MCL. For further details, please refer to BIBREF26 . In this case, a speech recognition sentence INLINEFORM12 is obtained by the speech recognition system using a word dictionary containing all the learned words. [tb] Self-localization utilizing spatial concepts [1] INLINEFORM13 INLINEFORM14 INLINEFORM15 INLINEFORM16 to INLINEFORM17 INLINEFORM18 () INLINEFORM19 () the speech signal is observed INLINEFORM20 add INLINEFORM21 to INLINEFORM22 INLINEFORM23 to INLINEFORM24 draw INLINEFORM25 with probability INLINEFORM26 add INLINEFORM27 to INLINEFORM28 INLINEFORM29
Experiment I
In this experiment, we validate the evidence of the proposed method (SpCoA) in an environment simulated on the simulator platform SIGVerse BIBREF30 , which enables the simulation of social interactions. The speech recognition is performed using the Japanese continuous speech recognition system Julius BIBREF31 , BIBREF32 . The set of 43 Japanese phonemes defined by Acoustical Society of Japan (ASJ)'s speech database committee is adopted by Julius BIBREF31 . The representation of these phonemes is also adopted in this study. The Julius system uses a word dictionary containing 115 Japanese syllables. The microphone attached on the robot is SHURE's PG27-USB. Further, an unsupervised morphological analyzer, a latticelm 0.4, is implemented BIBREF22 .
In the experiment, we compare the following three types of word segmentation methods. A set of syllable sequences is given to the graphical model of SpCoA by each method. This set is used for the learning of spatial concepts as recognized uttered sentences INLINEFORM0 .
The remainder of this section is organized as follows: In Section SECREF43 , the conditions and results of learning spatial concepts are described. The experiments performed using the learned spatial concepts are described in Section SECREF49 to SECREF64 . In Section SECREF49 , we evaluate the accuracy of the phoneme recognition and word segmentation for uttered sentences. In Section SECREF56 , we evaluate the clustering accuracy of the estimation results of index INLINEFORM0 of spatial concepts for each teaching utterance. In Section SECREF60 , we evaluate the accuracy of the acquisition of names of places. In Section SECREF64 , we show that spatial concepts can be utilized for effective self-localization.
Learning of spatial concepts
We conduct this experiment of spatial concept acquisition in the environment prepared on SIGVerse. The experimental environment is shown in Fig. FIGREF45 . A mobile robot can move by performing forward, backward, right rotation, or left rotation movements on a two-dimensional plane. In this experiment, the robot can use an approximately correct map of the considered environment. The robot has a range sensor in front and performs self-localization on the basis of an occupancy grid map. The initial particles are defined by the true initial position of the robot. The number of particles is INLINEFORM0 .
The lag value of the Monte Carlo fixed-lag smoothing is fixed at 100. The other parameters of this experiment are as follows: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 . The number of iterations used for Gibbs sampling is 100. This experiment does not include the direct assignment sampling of INLINEFORM9 in equation ( EQREF34 ), i.e., lines 22–24 of Algorithm SECREF23 are omitted, because we consider that the self-position can be obtained with sufficiently good accuracy by using the Monte Carlo smoothing. Eight places are selected as the learning targets, and eight types of place names are considered. Each uttered place name is shown in Fig. FIGREF45 . These utterances include the same name in different places, i.e., “teeburunoatari” (which means near the table in English), and different names in the same place, i.e., “kiqchiN” and “daidokoro” (which mean a kitchen in English). The other teaching names are “geNkaN” (which means an entrance or a doorway in English); “terebimae” (which means the front of the TV in English); “gomibako” (which means a trash box in English); “hoNdana” (which means a bookshelf in English); and “sofaamae” (which means the front of the sofa in English). The teaching utterances, including the 10 types of phrases, are spoken for a total of 90 times. The phrases in each uttered sentence are listed in Table TABREF46 .
The learning results of spatial concepts obtained by using the proposed method are presented here. Fig. FIGREF47 shows the position distributions learned in the experimental environment. Fig. FIGREF47 (top) shows the word distributions of the names of places for each spatial concept, and Fig. FIGREF47 (bottom) shows the multinomial distributions of the indices of the position distributions. Consequently, the proposed method can learn the names of places corresponding to each place of the learning target. In the spatial concept of index INLINEFORM0 , the highest probability of words was “sofamae”, and the highest probability of the indices of the position distribution was INLINEFORM1 ; therefore, the name of a place “sofamae” was learned to correspond to the position distribution of INLINEFORM2 . In the spatial concept of index INLINEFORM3 , “kiqchi” and “daidokoro” were learned to correspond to the position distribution of INLINEFORM4 . Therefore, this result shows that multiple names can be learned for the same place. In the spatial concept of index INLINEFORM5 , “te” and “durunoatari” (one word in a normal situation) were learned to correspond to the position distributions of INLINEFORM6 and INLINEFORM7 . Therefore, this result shows that the same name can be learned for multiple places.
Phoneme recognition accuracy of uttered sentences
We compared the performance of three types of word segmentation methods for all the considered uttered sentences. It was difficult to weigh the ambiguous syllable recognition and the unsupervised word segmentation separately. Therefore, this experiment considered the positions of a delimiter as a single letter. We calculated the matching rate of a phoneme string of a recognition result of each uttered sentence and the correct phoneme string of the teaching data that was suitably segmented into Japanese morphemes using MeCab, which is an off-the-shelf Japanese morphological analyzer that is widely used for natural language processing. The matching rate of the phoneme string was calculated by using the phoneme accuracy rate (PAR) as follows: DISPLAYFORM0
The numerator of equation ( EQREF52 ) is calculated by using the Levenshtein distance between the correct phoneme string and the recognition phoneme string. INLINEFORM0 denotes the number of substitutions; INLINEFORM1 , the number of deletions; and INLINEFORM2 , the number of insertions. INLINEFORM3 represents the number of phonemes of the correct phoneme string.
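A plausible form of the missing equation ( EQREF52 ), assuming the standard definition of the phoneme accuracy rate together with the terms defined above, is

\[ \mathrm{PAR} = \frac{N - S - D - I}{N} \]

where \(S\), \(D\), and \(I\) denote the numbers of substitutions, deletions, and insertions obtained from the Levenshtein alignment, and \(N\) is the number of phonemes in the correct phoneme string; these symbol names are our own and not necessarily the paper's original notation.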
Table TABREF54 shows the results of PAR. Table TABREF55 presents examples of the word segmentation results of the three considered methods. We found that the unsupervised morphological analyzer capable of using lattices improved the accuracy of phoneme recognition and word segmentation. This result suggests that this word segmentation method considers the multiple hypotheses of speech recognition as a whole and reduces uncertainty, such as variability in recognition, by using the syllable recognition results in the lattice format.
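As a concrete illustration of how a PAR value of this kind can be computed, the following minimal Python sketch (not the authors' code; function and variable names are ours) counts substitutions, deletions, and insertions with a standard edit-distance dynamic program and applies the formula assumed above.

def edit_ops(ref, hyp):
    """Count substitutions, deletions, and insertions needed to turn `ref` into `hyp`."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = (cost, S, D, I) for aligning ref[:i] with hyp[:j]
    dp = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, n + 1):
        c, s, d, ins = dp[i - 1][0]
        dp[i][0] = (c + 1, s, d + 1, ins)          # delete ref[i-1]
    for j in range(1, m + 1):
        c, s, d, ins = dp[0][j - 1]
        dp[0][j] = (c + 1, s, d, ins + 1)          # insert hyp[j-1]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            cands = []
            c, s, d, ins = dp[i - 1][j - 1]
            cands.append((c + sub_cost, s + sub_cost, d, ins))   # match / substitution
            c, s, d, ins = dp[i - 1][j]
            cands.append((c + 1, s, d + 1, ins))                 # deletion
            c, s, d, ins = dp[i][j - 1]
            cands.append((c + 1, s, d, ins + 1))                 # insertion
            dp[i][j] = min(cands)                                # minimize total cost
    _, S, D, I = dp[n][m]
    return S, D, I

def phoneme_accuracy_rate(ref_phonemes, hyp_phonemes):
    S, D, I = edit_ops(ref_phonemes, hyp_phonemes)
    N = len(ref_phonemes)
    return (N - S - D - I) / N

# Example with hypothetical phoneme strings (one character per phoneme for simplicity).
print(phoneme_accuracy_rate(list("daidokoro"), list("daidoko")))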
Estimation accuracy of spatial concepts
We compared the matching rate with the estimation results of index INLINEFORM0 of the spatial concepts of each teaching utterance and the classification results of the correct answer given by humans. The evaluation of this experiment used the adjusted Rand index (ARI) BIBREF33 . ARI is a measure of the degree of similarity between two clustering results.
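For reference, the ARI between an estimated assignment of teaching utterances to spatial-concept indices and a human-provided grouping can be computed with scikit-learn; the labels below are made-up placeholders rather than the experimental data.

from sklearn.metrics import adjusted_rand_score

# Hypothetical cluster labels: ground-truth grouping by humans vs. estimated concept indices.
human_labels     = [0, 0, 1, 1, 2, 2, 2, 3]
estimated_labels = [1, 1, 0, 0, 3, 3, 2, 2]

ari = adjusted_rand_score(human_labels, estimated_labels)
print(f"ARI = {ari:.3f}")  # 1.0 means identical clusterings up to label permutation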
Further, to investigate the effect of lexical acquisition using location information, we compared the proposed method with a word clustering method that does not use location information. In particular, the method of word clustering without location information used a Dirichlet process mixture (DPM) of unigram models with an SBP representation. The DPM hyperparameters corresponding to those of the proposed method were set to the same values, and the model was estimated using Gibbs sampling.
Fig. FIGREF59 shows the results of the average of the ARI values of 10 trials of learning by Gibbs sampling. Here, we found that the proposed method showed the best score. These results and the results reported in Section SECREF49 suggest that learning by uttered sentences obtained by better phoneme recognition and better word segmentation produces a good result for the acquisition of spatial concepts. Furthermore, in a comparison of two clustering methods, we found that SpCoA was considerably better than DPM, a word clustering method without location information, irrespective of the word segmentation method used. The experimental results showed that it is possible to improve the estimation accuracy of spatial concepts and vocabulary by performing word clustering that considered location information.
Accuracy of acquired phoneme sequences representing the names of places
We evaluated whether the names of places were properly learned for the considered teaching places. This experiment assumes that the robot is asked for the best phoneme sequence INLINEFORM0 representing its self-position INLINEFORM1 . The robot moves close to each teaching place. The probability of a word INLINEFORM2 when the self-position INLINEFORM3 of the robot is given, INLINEFORM4 , can be obtained by using equation ( EQREF37 ). The word having the highest probability was selected. We compared the PAR between the correct phoneme sequence and the selected name of the place. Because “kiqchiN” and “daidokoro” were taught for the same place, the word with the higher PAR score was adopted.
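A minimal sketch of this selection step is given below. It assumes that the learned model consists of per-concept word distributions, per-concept mixture weights over Gaussian position distributions, and a concept prior, and that the selection marginalizes over concepts and position distributions; these names and the exact marginalization are our assumptions rather than a reproduction of equation ( EQREF37 ).

import numpy as np
from scipy.stats import multivariate_normal

def best_place_name(x, vocab, word_dists, mix_weights, means, covs, concept_prior):
    """Pick the word w maximizing P(w | x) under an assumed mixture model.

    word_dists[c][w]  : P(w | concept c)
    mix_weights[c][k] : P(position distribution k | concept c)
    means[k], covs[k] : Gaussian position distribution k
    concept_prior[c]  : P(concept c)
    """
    n_words = len(vocab)
    scores = np.zeros(n_words)
    for c, prior_c in enumerate(concept_prior):
        # P(x | c) = sum_k pi_{c,k} N(x; mu_k, Sigma_k)
        p_x_given_c = sum(
            mix_weights[c][k] * multivariate_normal.pdf(x, means[k], covs[k])
            for k in range(len(means))
        )
        scores += prior_c * p_x_given_c * np.asarray(word_dists[c])
    return vocab[int(np.argmax(scores))]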
Fig. FIGREF63 shows the results of PAR for the word considered to be the name of a place. SpCoA (latticelm), the proposed method using the results of unsupervised word segmentation on the basis of the speech recognition results in the lattice format, showed the best PAR score. In the 1-best and BoS methods, part of the syllable sequence of the name of a place was segmented too finely, as shown in Table TABREF55 . Therefore, the robot could not learn the name of the teaching place as a coherent phoneme sequence. In contrast, the robot could learn the names of the teaching places more accurately by using the proposed method.
Self-localization that utilizes acquired spatial concepts
In this experiment, we validate that the robot can make efficient use of the acquired spatial concepts. We compare the estimation accuracy of localization for the proposed method (SpCoA MCL) and the conventional MCL. When a robot comes to the learning target, the utterer speaks out the sentence containing the name of the place once again for the robot. The moving trajectory of the robot and the uttered positions are the same in all the trials. In particular, the uttered sentence is “kokowa ** dayo”. When learning a task, this phrase is not used. The number of particles is INLINEFORM0 , and the initial particles are uniformly distributed in the considered environment. The robot performs a control operation for each time step.
The estimation error in the localization is evaluated as follows: While running localization, we record the estimation error (equation ( EQREF66 )) on the INLINEFORM0 plane of the floor for each time step. DISPLAYFORM0
where INLINEFORM0 denote the true position coordinates of the robot as obtained from the simulator, and INLINEFORM1 , INLINEFORM2 represent the weighted mean values of localization coordinates. The normalized weight INLINEFORM3 is obtained from the sensor model in MCL as a likelihood. In the utterance time, this likelihood is multiplied by the value calculated using equation ( EQREF37 ). INLINEFORM4 , INLINEFORM5 denote the INLINEFORM6 -coordinate and the INLINEFORM7 -coordinate of index INLINEFORM8 of each particle at time INLINEFORM9 . After running the localization, we calculated the average of INLINEFORM10 .
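Given this description, the missing equation ( EQREF66 ) is presumably the Euclidean distance between the true position and the particle-weighted mean estimate; under that assumption it can be written as

\[ E_t = \sqrt{\big(x_t^{\mathrm{true}} - \bar{x}_t\big)^2 + \big(y_t^{\mathrm{true}} - \bar{y}_t\big)^2}, \qquad \bar{x}_t = \sum_{i} \omega_t^{(i)} x_t^{(i)}, \quad \bar{y}_t = \sum_{i} \omega_t^{(i)} y_t^{(i)}, \]

where the symbols are our own notation for the quantities described in the text.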
Further, we compared the estimation accuracy rate (EAR) of the global localization. In each trial, we calculated the proportion of time steps in which the estimation error was less than 50 cm.
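A minimal sketch of this evaluation, assuming per-time-step arrays of true positions, particle positions, and normalized weights (all array and function names are ours), is as follows.

import numpy as np

def localization_metrics(true_xy, particle_xy, weights, threshold_cm=50.0):
    """true_xy: (T, 2); particle_xy: (T, M, 2); weights: (T, M), normalized per step.

    `threshold_cm` assumes positions are expressed in centimetres.
    """
    # Weighted mean estimate per time step.
    est_xy = np.einsum("tm,tmd->td", weights, particle_xy)
    errors = np.linalg.norm(true_xy - est_xy, axis=1)   # estimation error per step
    mean_error = errors.mean()                          # averaged over the run
    ear = float((errors < threshold_cm).mean())         # estimation accuracy rate
    return mean_error, ear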
Fig. FIGREF68 shows the results of the estimation error and the EAR for 10 trials of each method. All trials of SpCoA MCL (latticelm) and almost all trials of the method using 1-best NPYLM and BoS showed relatively small estimation errors. Results of the second trial of 1-best NPYLM and the fifth trial of BoS showed higher estimation errors. In these trials, many particles converged to other places instead of the place where the robot was, based on utterance information. Nevertheless, compared with those of the conventional MCL, the results obtained using spatial concepts showed an obvious improvement in the estimation accuracy. Consequently, spatial concepts acquired by using the proposed method proved to be very helpful in improving the localization accuracy.
Experiment II
In this experiment, the effectiveness of the proposed method was tested by using an autonomous mobile robot TurtleBot 2 in a real environment. Fig. FIGREF70 shows TurtleBot 2 used in the experiments.
Mapping and self-localization are performed by the robot operating system (ROS). The speech recognition system, the microphone, and the unsupervised morphological analyzer were the same as those described in Section SECREF4 .
Learning of spatial concepts in the real environment
We conducted an experiment of the spatial concept acquisition in a real environment of an entire floor of a building. In this experiment, self-localization was performed using a map generated by SLAM. The initial particles are defined by the true initial position of the robot. The generated map in the real environment and the names of teaching places are shown in Fig. FIGREF73 . The number of teaching places was 19, and the number of teaching names was 16. The teaching utterances were performed for a total of 100 times.
Fig. FIGREF75 shows the position distributions learned on the map. Table TABREF76 shows the five best elements of the multinomial distributions of the name of place INLINEFORM0 and the multinomial distributions of the indices of the position distribution INLINEFORM1 for each index of spatial concept INLINEFORM2 .
Thus, we found that the proposed method can learn the names of places corresponding to the considered teaching places in the real environment. For example, in the spatial concept of index INLINEFORM0 , “torire” was learned to correspond to a position distribution of INLINEFORM1 . Similarly, “kidanokeN” corresponded to INLINEFORM2 in INLINEFORM3 , and “kaigihitsu” corresponded to INLINEFORM4 in INLINEFORM5 . In the spatial concept of index INLINEFORM6 , part of the syllable sequences was segmented too finely, as “sohatsuke”, “N”, and “tani”, “guchi”. In this case, the robot was taught two types of names. These words were learned to correspond to the same position distribution of INLINEFORM7 . In INLINEFORM8 , “gomibako” showed a high probability, and it corresponded to three position distributions, INLINEFORM9 . The position distribution of INLINEFORM10 had the fourth highest probability in the spatial concept INLINEFORM11 . Therefore, “raqkukeN,” which had the fifth highest probability in the spatial concept INLINEFORM12 (and was expected to relate to the spatial concept INLINEFORM13 ), could be estimated as a word drawn from spatial concept INLINEFORM14 . However, in practice, this situation did not cause any severe problems because the spatial concept of index INLINEFORM15 had higher probabilities for the word “rapukeN” and the position distribution INLINEFORM16 than INLINEFORM17 did. In the probabilistic model, the relative probability and the integrative information are important. When the robot listened to an utterance related to “raqkukeN,” it could make use of the spatial concept of index INLINEFORM18 for self-localization with a high probability, and appropriately updated its estimated self-location. We expected the spatial concept of index INLINEFORM19 to be learned as two separate spatial concepts. However, “watarirooka” and “kaidaNmae” were learned as the same spatial concept. Therefore, the multinomial distribution INLINEFORM20 showed a higher probability for the indices of the position distribution corresponding to the teaching places of both “watarirooka” and “kaidaNmae”.
The proposed method adopts a nonparametric Bayesian method in which it is possible to form spatial concepts that allow many-to-many correspondences between names and places. In contrast, this can create ambiguity that classifies originally different spatial concepts into one spatial concept as a side effect. There is a possibility that the ambiguity of concepts such as INLINEFORM0 will have a negative effect on self-localization, even though the self-localization performance was (overall) clearly increased by employing the proposed method. The solution of this problem will be considered in future work.
In terms of the PAR of uttered sentences, the evaluation value from the evaluation method used in Section SECREF49 is 0.83; this value is comparable to the result in Section SECREF49 . However, in terms of the PAR of the name of the place, the evaluation value from the evaluation method used in Section SECREF60 is 0.35, which is lower than that in Section SECREF60 . We consider that the increase in uncertainty in the real environment and the increase in the number of teaching words reduced the performance. We expect that this problem could be improved using further experience related to places, e.g., if the number of utterances per place is increased, and additional sensory information is provided.
Modification of localization by the acquired spatial concepts
In this experiment, we verified the modification results of self-localization by using spatial concepts in global self-localization. This experiment used the learning results of spatial concepts presented in Section SECREF71 . The experimental procedures are shown below. The initial particles were uniformly distributed on the entire floor. The robot begins to move from a little distance away to the target place. When the robot reached the target place, the utterer spoke the sentence containing the name of the place for the robot. Upon obtaining the speech information, the robot modifies the self-localization on the basis of the acquired spatial concepts. The number of particles was the same as that mentioned in Section SECREF71 .
Fig. FIGREF80 shows the results of the self-localization before (the top part of the figure) and after (the bottom part of the figure) the utterance for three places. The particle states are denoted by red arrows. The moving trajectory of the robot is indicated by a green dotted arrow. Figs. FIGREF80 (a), (b), and (c) show the results for the names of places “toire”, “souhatsukeN”, and “gomibako”. Further, three spatial concepts, i.e., those at INLINEFORM0 , were learned as “gomibako”. In this experiment, the utterer spoke to the robot when the robot came close to the place of INLINEFORM1 . In all the examples shown in the top part of the figure, the particles were dispersed over several places. In contrast, in all the examples shown in the bottom part of the figure, the particles concentrated near the true position of the robot. Thus, we can conclude that the proposed method can modify self-localization by using spatial concepts.
Conclusion and Future Work
In this paper, we discussed the spatial concept acquisition, lexical acquisition related to places, and self-localization using acquired spatial concepts. We proposed nonparametric Bayesian spatial concept acquisition method SpCoA that integrates latticelm BIBREF22 , a spatial clustering method, and MCL. We conducted experiments for evaluating the performance of SpCoA in a simulation and a real environment. SpCoA showed good results in all the experiments. In experiments of the learning of spatial concepts, the robot could form spatial concepts for the places of the learning targets from human continuous speech signals in both the room of the simulation environment and the entire floor of the real environment. Further, the unsupervised word segmentation method latticelm could reduce the variability and errors in the recognition of phonemes in all the utterances. SpCoA achieved more accurate lexical acquisition by performing word segmentation using the lattices of the speech recognition results. In the self-localization experiments, the robot could effectively utilize the acquired spatial concepts for recognizing self-position and reducing the estimation errors in self-localization. As a method that further improves the performance of the lexical acquisition, a mutual learning method was proposed by Nakamura et al. on the basis of the integration of the learning of object concepts with a language model BIBREF34 , BIBREF35 . Following a similar approach, Heymann et al. proposed a method that alternately and repeatedly updates phoneme recognition results and the language model by using unsupervised word segmentation BIBREF36 . As a result, they achieved robust lexical acquisition. In our study, we can expect to improve the accuracy of lexical acquisition for spatial concepts by estimating both the spatial concepts and the language model.
Furthermore, as a future work, we consider it necessary for robots to learn spatial concepts online and to recognize whether the uttered word indicates the current place or destination. Furthermore, developing a method that simultaneously acquires spatial concepts and builds a map is one of our future objectives. We believe that the spatial concepts will have a positive effect on the mapping. We also intend to examine a method that associates the image and the landscape with spatial concepts and a method that estimates both spatial concepts and object concepts.
[] Akira Taniguchi received his BE degree from Ritsumeikan University in 2013 and his ME degree from the Graduate School of Information Science and Engineering, Ritsumeikan University, in 2015. He is currently working toward his PhD degree at the Emergent System Lab, Ritsumeikan University, Japan. His research interests include language acquisition, concept acquisition, and symbol emergence in robotics.
[] Tadahiro Taniguchi received the ME and PhD degrees from Kyoto University in 2003 and 2006, respectively. From April 2005 to March 2006, he was a Japan Society for the Promotion of Science (JSPS) research fellow (DC2) in the Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University. From April 2006 to March 2007, he was a JSPS research fellow (PD) in the same department. From April 2007 to March 2008, he was a JSPS research fellow in the Department of Systems Science, Graduate School of Informatics, Kyoto University. From April 2008 to March 2010, he was an assistant professor at the Department of Human and Computer Intelligence, Ritsumeikan University. Since April 2010, he has been an associate professor in the same department. He is currently engaged in research on machine learning, emergent systems, and semiotics.
[] Tetsunari Inamura received the BE, MS and PhD degrees from the University of Tokyo, in 1995, 1997 and 2000, respectively. He was a Researcher of the CREST program, Japanese Science and Technology Cooperation, from 2000 to 2003, and then joined the Department of Mechano-Informatics, School of Information Science and Technology, University of Tokyo as a Lecturer, from 2003 to 2006. He is now an Associate Professor in the Principles of Informatics Research Division, National Institute of Informatics, and an Associate Professor in the Department of Informatics, School of Multidisciplinary Sciences, Graduate University for Advanced Studies (SOKENDAI). His research interests include imitation learning and symbol emergence on humanoid robots, development of interactive robots through virtual reality and so on. | PAR score |
d557752c4706b65dcdb7718272180c59d77fb7a7 | d557752c4706b65dcdb7718272180c59d77fb7a7_0 | Q: Which method do they use for word segmentation?
Text: Introduction
Autonomous robots, such as service robots, that operate alongside humans in human living environments have to be able to perform various tasks and communicate through language. To this end, robots are required to acquire novel concepts and vocabulary on the basis of the information obtained from their sensors, e.g., laser sensors, microphones, and cameras, and to recognize a variety of objects, places, and situations in the ambient environment. Above all, we consider it important for the robot to learn the names that humans associate with places in the environment and the spatial areas corresponding to these names; i.e., the robot has to be able to understand words related to places. It is therefore important to deal with considerable uncertainty, such as the robot's movement errors, sensor noise, and speech recognition errors.
Several studies on language acquisition by robots have assumed that robots have no prior lexical knowledge. These studies differ from speech recognition studies based on a large vocabulary and natural language processing studies based on lexical, syntactic, and semantic knowledge BIBREF0 , BIBREF1 . Studies on language acquisition by robots also constitute a constructive approach to the human developmental process and the emergence of symbols.
The objectives of this study were to build a robot that learns words related to places and efficiently utilizes this learned vocabulary in self-localization. Lexical acquisition related to places is expected to enable a robot to improve its spatial cognition. A schematic representation depicting the target task of this study is shown in Fig. FIGREF3 . This study assumes that a robot does not have any vocabularies in advance but can recognize syllables or phonemes. The robot then performs self-localization while moving around in the environment, as shown in Fig. FIGREF3 (a). An utterer speaks a sentence including the name of the place to the robot, as shown in Fig. FIGREF3 (b). For the purposes of this study, we need to consider the problems of self-localization and lexical acquisition simultaneously.
When a robot learns novel words from utterances, it is difficult to determine segmentation boundaries and the identity of different phoneme sequences from the speech recognition results, which can lead to errors. First, let us consider the case of the lexical acquisition of an isolated word. For example, if a robot obtains the speech recognition results “aporu”, “epou”, and “aqpuru” (incorrect phoneme recognition of apple), it is difficult for the robot to determine whether they denote the same referent without prior knowledge. Second, let us consider a case of the lexical acquisition of the utterance of a sentence. For example, a robot obtains a speech recognition result, such as “thisizanaporu.” The robot has to necessarily segment a sentence into individual words, e.g., “this”, “iz”, “an”, and “aporu”. In addition, it is necessary for the robot to recognize words referring to the same referent, e.g., the fruit apple, from among the many segmented results that contain errors. In case of Fig. FIGREF3 (c), there is some possibility of learning names including phoneme errors, e.g., “afroqtabutibe,” because the robot does not have any lexical knowledge. On the other hand, when a robot performs online probabilistic self-localization, we assume that the robot uses sensor data and control data, e.g., values obtained using a range sensor and odometry. If the position of the robot on the global map is unclear, the difficulties associated with the identification of the self-position by only using local sensor information become problematic. In the case of global localization using local information, e.g., a range sensor, the problem that the hypothesis of self-position is present in multiple remote locations, frequently occurs, as shown in Fig. FIGREF3 (d).
In order to solve the abovementioned problems, in this study, we adopted the following approach. An utterance is recognized as not a single phoneme sequence but a set of candidates of multiple phonemes. We attempt to suppress the variability in the speech recognition results by performing word discovery taking into account the multiple candidates of speech recognition. In addition, the names of places are learned by associating with words and positions. The lexical acquisition is complemented by using certain particular spatial information; i.e., this information is obtained by hearing utterances including the same word in the same place many times. Furthermore, in this study, we attempt to address the problem of the uncertainty of self-localization by improving the self-position errors by using a recognized utterance including the name of the current place and the acquired spatial concepts, as shown in Fig. FIGREF3 (e).
In this paper, we propose a nonparametric Bayesian spatial concept acquisition method (SpCoA) on the basis of unsupervised word segmentation and a nonparametric Bayesian generative model that integrates self-localization and clustering in both words and places. The main contributions of this paper are as follows:
The remainder of this paper is organized as follows: In Section SECREF2 , previous studies on language acquisition and lexical acquisition relevant to our study are described. In Section SECREF3 , the proposed method SpCoA is presented. In Sections SECREF4 and SECREF5 , we discuss the effectiveness of SpCoA in the simulation and in the real environment. Section SECREF6 concludes this paper.
Lexical acquisition
Most studies on lexical acquisition typically focus on lexicons about objects BIBREF0 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Many of these studies have not been able to address the lexical acquisition of words other than those related to objects, e.g., words about places.
Roy et al. proposed a computational model that enables a robot to learn the names of objects from an object image and spontaneous infant-directed speech BIBREF0 . Their results showed that the model performed speech segmentation, word discovery, and visual categorization. Iwahashi et al. reported that a robot properly understands the situation and acquires the relationship of object behaviors and sentences BIBREF2 , BIBREF3 , BIBREF4 . Qu & Chai focused on the conjunction between speech and eye gaze and the use of domain knowledge in lexical acquisition BIBREF6 , BIBREF7 . They proposed an unsupervised learning method that automatically acquires novel words for an interactive system. Qu & Chai's method based on the IBM translation model BIBREF11 estimates the word-entity association probability.
Nakamura et al. proposed a method to learn object concepts and word meanings from multimodal information and verbal information BIBREF9 . The method proposed in BIBREF9 is a categorization method based on multimodal latent Dirichlet allocation (MLDA) that enables the acquisition of object concepts from multimodal information, such as visual, auditory, and haptic information BIBREF12 . Araki et al. addressed the development of a method combining unsupervised word segmentation from uttered sentences by a nested Pitman-Yor language model (NPYLM) BIBREF13 and the learning of object concepts by MLDA BIBREF10 . However, the disadvantage of using NPYLM was that phoneme sequences with errors did not result in appropriate word segmentation.
These studies did not address lexical acquisition for space and place that can also tolerate the uncertainty of phoneme recognition. However, to introduce robots into human living environments, robots need to acquire a lexicon related not only to objects but also to places. Our study focuses on lexical acquisition related to places. By using SpCoA, robots can adaptively learn the names of places in various human living environments. We consider that the acquired names of places can be useful for various tasks, e.g., tasks in which a robot moves in response to spoken instructions.
Simultaneous learning of places and vocabulary
The following studies have addressed lexical acquisition related to places. However, these studies could not utilize the learned language knowledge in other estimations such as the self-localization of a robot.
Taguchi et al. proposed a method for the unsupervised learning of phoneme sequences and relationships between words and objects from various user utterances without any prior linguistic knowledge other than an acoustic model of phonemes BIBREF1 , BIBREF14 . Further, they proposed a method for the simultaneous categorization of self-position coordinates and lexical learning BIBREF15 . These experimental results showed that it was possible to learn the name of a place from utterances in some cases and to output words corresponding to places in a location that was not used for learning.
Milford et al. proposed RatSLAM inspired by the biological knowledge of a pose cell of the hippocampus of rodents BIBREF16 . Milford et al. proposed a method that enables a robot to acquire spatial concepts by using RatSLAM BIBREF17 . Further, Lingodroids, mobile robots that learn a language through robot-to-robot communication, have been studied BIBREF18 , BIBREF19 , BIBREF20 . Here, a robot communicated the name of a place to other robots at various locations. Experimental results showed that two robots acquired the lexicon of places that they had in common. In BIBREF20 , the researchers showed that it was possible to learn temporal concepts in a manner analogous to the acquisition of spatial concepts. These studies reported that the robots created their own vocabulary. However, these studies did not consider the acquisition of a lexicon by human-to-robot speech interactions.
Welke et al. proposed a method that acquires spatial representation by the integration of the representation of the continuous state space on the sensorimotor level and the discrete symbolic entities used in high-level reasoning BIBREF21 . This method estimates the probable spatial domain and word from the given objects by using the spatial lexical knowledge extracted from Google Corpus and the position information of the object. Their study is different from ours because their study did not consider lexicon learning from human speech.
In the case of global localization, the hypothesis of the self-position often remains in multiple remote places. In this case, there is some possibility of performing an incorrect estimation and increasing the estimation error. This problem exists during teaching tasks and during self-localization after the lexical acquisition. The abovementioned studies could not deal with this problem. In this paper, we have proposed a method that enables a robot to perform more accurate self-localization by reducing the estimation error during the teaching task with a smoothing method and by utilizing the words acquired through lexical acquisition. The strengths of this study are that the learning of spatial concepts and self-localization are represented as one generative model, and that robots are able to autonomously utilize the acquired lexicon for self-localization.
Spatial Concept Acquisition
We propose a nonparametric Bayesian spatial concept acquisition method (SpCoA) that integrates a nonparametric morphological analyzer for lattices BIBREF22 , i.e., latticelm, a spatial clustering method, and Monte Carlo localization (MCL) BIBREF23 .
Generative model
In our study, we define a position as a specific coordinate or a local point in the environment, and the position distribution as the spatial area of the environment. Further, we define a spatial concept as the names of places and the position distributions corresponding to these names.
The model that was developed for spatial concept acquisition is a probabilistic generative model that integrates self-localization with the simultaneous clustering of places and words. Fig. FIGREF13 shows the graphical model for spatial concept acquisition. Table TABREF14 shows each variable of the graphical model. The number of words in a sentence at time INLINEFORM0 is denoted as INLINEFORM1 . The generative model of the proposed method is defined by equations ( EQREF11 )–(): DISPLAYFORM0
Then, the probability distribution for equation () can be defined as follows: DISPLAYFORM0
The prior distribution configured by using the stick breaking process (SBP) BIBREF24 is denoted as INLINEFORM0 , the multinomial distribution as INLINEFORM1 , the Dirichlet distribution as INLINEFORM2 , the inverse–Wishart distribution as INLINEFORM3 , and the multivariate Gaussian (normal) distribution as INLINEFORM4 . The motion model and the sensor model of self-localization are denoted as INLINEFORM5 and INLINEFORM6 in equations () and (), respectively.
This model can learn an appropriate number of spatial concepts, depending on the data, by using a nonparametric Bayesian approach. We use the SBP, which is one of the methods based on the Dirichlet process. In particular, this model can consider a theoretically infinite number of spatial concepts INLINEFORM0 and position distributions INLINEFORM1 . SBP computations are difficult because they generate an infinite number of parameters. In this study, we approximate a number of parameters by setting sufficiently large values, i.e., a weak-limit approximation BIBREF25 .
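The weak-limit approximation mentioned above can be sketched as a truncated stick-breaking construction; the hyperparameter and function names below are our own.

import numpy as np

def truncated_stick_breaking(alpha, max_components, rng=None):
    """Draw mixture weights pi ~ GEM(alpha), truncated at `max_components` (weak limit)."""
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=max_components)
    betas[-1] = 1.0                        # force the truncated stick to sum to one
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

pi = truncated_stick_breaking(alpha=1.0, max_components=50)
print(pi.sum())   # 1.0 up to floating-point error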
It is possible to correlate a name with multiple places, e.g., “staircase” refers to two different places, and a place with multiple names, e.g., “toilet” and “restroom” refer to the same place. Spatial concepts are represented by a word distribution of the names of the place INLINEFORM0 and several position distributions ( INLINEFORM1 , INLINEFORM2 ) indicated by a multinomial distribution INLINEFORM3 . In other words, this model is capable of relating a mixture of Gaussian distributions to a multinomial distribution of the names of places. It should be noted that the arrows connecting INLINEFORM4 to the surrounding nodes of the proposed graphical model differ from those of an ordinary Gaussian mixture model (GMM). We assume that words obtained by the robot do not change its position, but that the position of the robot affects the distribution of words. Therefore, the proposed generative process assumes that the index of the position distribution INLINEFORM5 , i.e., the category of the place, is generated from the position of the robot INLINEFORM6 . This change can be introduced naturally, without any trouble, by introducing equation ( EQREF12 ).
Overview of the proposed method SpCoA
We assume that a robot performs self-localization by using control data and sensor data at all times. The procedure for the learning of spatial concepts is as follows:
An utterer teaches a robot the names of places, as shown in Fig. FIGREF3 (b). Every time the robot arrives at a place that was a designated learning target, the utterer says a sentence, including the name of the current place.
The robot performs speech recognition on the uttered speech signal data. At this stage, the speech recognition system uses a word dictionary containing only Japanese syllables. The speech recognition results are obtained in a lattice format.
Word segmentation is performed by using the lattices of the speech recognition results.
The robot learns spatial concepts from words obtained by word segmentation and robot positions obtained by self-localization for all teaching times. The details of the learning are given in SECREF23 .
The procedure for self-localization utilizing spatial concepts is as follows:
The words of the learned spatial concepts are registered to the word dictionary of the speech recognition system.
When a robot obtains a speech signal, speech recognition is performed. Then, a word sequence as the 1-best speech recognition result is obtained.
The robot modifies the self-localization from words obtained by speech recognition and the position likelihood obtained by spatial concepts. The details of self-localization are provided in SECREF35 .
The proposed method can learn words related to places from the utterances of sentences. We use latticelm, an unsupervised word segmentation method that can directly segment words from the lattices of the speech recognition results of the uttered sentences BIBREF22 . The lattice can compactly represent the set of promising hypotheses of a speech recognition result, such as the N-best hypotheses, in a directed graph format. Unsupervised word segmentation using the lattices of syllable recognition is expected to reduce the variability and errors in phonemes as compared to NPYLM BIBREF13 , i.e., word segmentation using the 1-best speech recognition results.
The self-localization method adopts MCL BIBREF23 , a method that is widely used for the localization of mobile robots and for simultaneous localization and mapping (SLAM) BIBREF26 . We assume that a robot generates an environment map in advance by using MCL-based SLAM such as FastSLAM BIBREF27 , BIBREF28 , and then performs localization by using the generated map. Either type of environment map, an occupancy grid map or a landmark map, is acceptable.
Learning of spatial concept
Spatial concepts are learned from multiple teaching data, control data, and sensor data. The teaching data are a set of uttered sentences for all teaching times. Segmented words of an uttered sentence are converted into a bag-of-words (BoW) representation as a vector of the occurrence counts of words INLINEFORM0 . The set of the teaching times is denoted as INLINEFORM1 , and the number of teaching data items is denoted as INLINEFORM2 . The model parameters are denoted as INLINEFORM3 . The initial values of the model parameters can be set arbitrarily in accordance with a condition. Further, the sampling values of the model parameters from the following joint posterior distribution are obtained by performing Gibbs sampling. DISPLAYFORM0
where the hyperparameters of the model are denoted as INLINEFORM0 . The algorithm of the learning of spatial concepts is shown in Algorithm SECREF23 .
The conditional posterior distribution of each element used for performing Gibbs sampling can be expressed as follows: An index INLINEFORM0 of the position distribution is sampled for each data INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
An index INLINEFORM0 of the spatial concepts is sampled for each data item INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector of the occurrence counts of words in the sentence at time INLINEFORM1 . A posterior distribution representing word probabilities of the name of place INLINEFORM2 is calculated as follows: DISPLAYFORM0
where variables with the subscript INLINEFORM0 denote the set of all teaching times. A word probability of the name of place INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 represents the posterior parameter and INLINEFORM1 denotes the BoW representation of all sentences of INLINEFORM2 in INLINEFORM3 . A posterior distribution representing the position distribution INLINEFORM4 is calculated as follows: DISPLAYFORM0
A position distribution INLINEFORM0 , INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 denotes the Gaussian–inverse–Wishart distribution; INLINEFORM1 , and INLINEFORM2 represent the posterior parameters; and INLINEFORM3 indicates the set of the teaching positions of INLINEFORM4 in INLINEFORM5 . A topic probability distribution INLINEFORM6 of spatial concepts is sampled as follows: DISPLAYFORM0
A posterior distribution representing the mixed weights INLINEFORM0 of the position distributions is calculated as follows: DISPLAYFORM0
A mixed weight INLINEFORM0 of the position distributions is sampled for each INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector counting all the indices of the Gaussian distribution of INLINEFORM1 in INLINEFORM2 .
Self-positions INLINEFORM0 are sampled by using a Monte Carlo fixed-lag smoother BIBREF29 in the learning phase. The smoother can estimate the self-position INLINEFORM1 not as INLINEFORM2 , i.e., a sequential estimation from the given data INLINEFORM3 until time INLINEFORM4 , but as INLINEFORM5 , i.e., an estimation from the given data INLINEFORM6 until a time INLINEFORM7 later than INLINEFORM8 INLINEFORM9 . In general, the smoothing method can provide a more accurate estimation than online estimation with MCL. In contrast, if the self-position of the robot INLINEFORM10 is sampled by direct assignment sampling for each time INLINEFORM11 , the sampling of INLINEFORM12 is divided into the case of a teaching time INLINEFORM13 and the case of any other time INLINEFORM14 , as follows: DISPLAYFORM0
Algorithm: Learning of spatial concepts
  INLINEFORM0 , INLINEFORM1
  (Localization and speech recognition)
  for INLINEFORM2 to INLINEFORM3 :
    INLINEFORM4 BIBREF29
    if the speech signal is observed:
      INLINEFORM5
      add INLINEFORM6 to INLINEFORM7 (registering the lattice)
      add INLINEFORM8 to INLINEFORM9 (registering the teaching time)
  (Word segmentation using lattices)
  INLINEFORM10 BIBREF22
  (Gibbs sampling)
  Initialize parameters INLINEFORM11 , INLINEFORM12 , INLINEFORM13
  for INLINEFORM14 to INLINEFORM15 :
    INLINEFORM16 ( EQREF25 )
    INLINEFORM17 ( EQREF26 )
    INLINEFORM18 ( EQREF28 )
    INLINEFORM19 ( EQREF30 )
    INLINEFORM20 ( EQREF31 )
    INLINEFORM21 ( EQREF33 )
    for INLINEFORM22 to INLINEFORM23 :
      INLINEFORM24 ( EQREF34 )
  INLINEFORM25
Self-localization of after learning spatial concepts
A robot that has acquired spatial concepts can leverage them for self-localization. The estimated model parameters INLINEFORM0 and a speech recognition sentence INLINEFORM1 at time INLINEFORM2 are added to the conditioning part of the probability formula of MCL as follows: DISPLAYFORM0
When the robot hears the name of a place spoken by the utterer, in addition to the likelihood of the sensor model of MCL, the likelihood of INLINEFORM0 with respect to a speech recognition sentence is calculated as follows: DISPLAYFORM0
The algorithm of self-localization utilizing spatial concepts is shown in Algorithm SECREF35 . The set of particles is denoted as INLINEFORM0 , and the temporary set that stores the pairs of a particle INLINEFORM1 and its weight INLINEFORM2 , i.e., INLINEFORM3 , is denoted as INLINEFORM4 . The number of particles is INLINEFORM5 . The function INLINEFORM6 moves each particle from its previous state INLINEFORM7 to its current state INLINEFORM8 by using control data. The function INLINEFORM9 calculates the likelihood of each particle INLINEFORM10 using sensor data INLINEFORM11 . These functions are normally used in MCL. For further details, please refer to BIBREF26 . In this case, a speech recognition sentence INLINEFORM12 is obtained by the speech recognition system using a word dictionary containing all the learned words.
Algorithm: Self-localization utilizing spatial concepts
  INLINEFORM13
  INLINEFORM14
  INLINEFORM15
  for INLINEFORM16 to INLINEFORM17 :
    INLINEFORM18 ()
    INLINEFORM19 ()
    if the speech signal is observed:
      INLINEFORM20
    add INLINEFORM21 to INLINEFORM22
  for INLINEFORM23 to INLINEFORM24 :
    draw INLINEFORM25 with probability INLINEFORM26
    add INLINEFORM27 to INLINEFORM28
  INLINEFORM29
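A minimal sketch of the weight update inside this loop is given below; it assumes generic motion and measurement models and a speech-likelihood term computed from the learned spatial concepts, and all function names are placeholders rather than the paper's implementation.

import numpy as np

def mcl_step(particles, u_t, z_t, sentence, motion_model, measurement_model,
             speech_likelihood, rng):
    """One MCL update; `speech_likelihood(sentence, x)` is applied only when speech is heard."""
    moved = np.array([motion_model(x, u_t, rng) for x in particles])
    weights = np.array([measurement_model(z_t, x) for x in moved])
    if sentence is not None:                      # the speech signal is observed
        weights *= np.array([speech_likelihood(sentence, x) for x in moved])
    weights /= weights.sum()
    # Resampling: draw M particles with probability proportional to their weights.
    idx = rng.choice(len(moved), size=len(moved), p=weights)
    return moved[idx]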
Experiment I
In this experiment, we validate the proposed method (SpCoA) in an environment simulated on the simulator platform SIGVerse BIBREF30 , which enables the simulation of social interactions. Speech recognition is performed using the Japanese continuous speech recognition system Julius BIBREF31 , BIBREF32 . The set of 43 Japanese phonemes defined by the Acoustical Society of Japan (ASJ)'s speech database committee is adopted by Julius BIBREF31 , and the same phoneme representation is adopted in this study. The Julius system uses a word dictionary containing 115 Japanese syllables. The microphone attached to the robot is a SHURE PG27-USB. Further, the unsupervised morphological analyzer latticelm 0.4 is used BIBREF22 .
In the experiment, we compare the following three types of word segmentation methods. A set of syllable sequences is given to the graphical model of SpCoA by each method. This set is used for the learning of spatial concepts as recognized uttered sentences INLINEFORM0 .
The remainder of this section is organized as follows: In Section SECREF43 , the conditions and results of learning spatial concepts are described. The experiments performed using the learned spatial concepts are described in Section SECREF49 to SECREF64 . In Section SECREF49 , we evaluate the accuracy of the phoneme recognition and word segmentation for uttered sentences. In Section SECREF56 , we evaluate the clustering accuracy of the estimation results of index INLINEFORM0 of spatial concepts for each teaching utterance. In Section SECREF60 , we evaluate the accuracy of the acquisition of names of places. In Section SECREF64 , we show that spatial concepts can be utilized for effective self-localization.
Learning of spatial concepts
We conduct this experiment of spatial concept acquisition in the environment prepared on SIGVerse. The experimental environment is shown in Fig. FIGREF45 . A mobile robot can move by performing forward, backward, right rotation, or left rotation movements on a two-dimensional plane. In this experiment, the robot can use an approximately correct map of the considered environment. The robot has a range sensor in front and performs self-localization on the basis of an occupancy grid map. The initial particles are defined by the true initial position of the robot. The number of particles is INLINEFORM0 .
The lag value of the Monte Carlo fixed-lag smoothing is fixed at 100. The other parameters of this experiment are as follows: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 . The number of iterations used for Gibbs sampling is 100. This experiment does not include the direct assignment sampling of INLINEFORM9 in equation ( EQREF34 ), i.e., lines 22–24 of Algorithm SECREF23 are omitted, because we consider that the self-position can be obtained with sufficiently good accuracy by using the Monte Carlo smoothing. Eight places are selected as the learning targets, and eight types of place names are considered. Each uttered place name is shown in Fig. FIGREF45 . These utterances include the same name in different places, i.e., “teeburunoatari” (which means near the table in English), and different names in the same place, i.e., “kiqchiN” and “daidokoro” (which mean a kitchen in English). The other teaching names are “geNkaN” (which means an entrance or a doorway in English); “terebimae” (which means the front of the TV in English); “gomibako” (which means a trash box in English); “hoNdana” (which means a bookshelf in English); and “sofaamae” (which means the front of the sofa in English). The teaching utterances, including the 10 types of phrases, are spoken for a total of 90 times. The phrases in each uttered sentence are listed in Table TABREF46 .
The learning results of spatial concepts obtained by using the proposed method are presented here. Fig. FIGREF47 shows the position distributions learned in the experimental environment. Fig. FIGREF47 (top) shows the word distributions of the names of places for each spatial concept, and Fig. FIGREF47 (bottom) shows the multinomial distributions of the indices of the position distributions. Consequently, the proposed method can learn the names of places corresponding to each place of the learning target. In the spatial concept of index INLINEFORM0 , the highest probability of words was “sofamae”, and the highest probability of the indices of the position distribution was INLINEFORM1 ; therefore, the name of a place “sofamae” was learned to correspond to the position distribution of INLINEFORM2 . In the spatial concept of index INLINEFORM3 , “kiqchi” and “daidokoro” were learned to correspond to the position distribution of INLINEFORM4 . Therefore, this result shows that multiple names can be learned for the same place. In the spatial concept of index INLINEFORM5 , “te” and “durunoatari” (one word in a normal situation) were learned to correspond to the position distributions of INLINEFORM6 and INLINEFORM7 . Therefore, this result shows that the same name can be learned for multiple places.
Phoneme recognition accuracy of uttered sentences
We compared the performance of the three types of word segmentation methods on all the considered uttered sentences. Because it was difficult to evaluate the ambiguous syllable recognition and the unsupervised word segmentation separately, this experiment treated each word delimiter position as a single letter. We calculated the matching rate between the phoneme string of the recognition result of each uttered sentence and the correct phoneme string of the teaching data, which was suitably segmented into Japanese morphemes using MeCab, an off-the-shelf Japanese morphological analyzer that is widely used for natural language processing. The matching rate of the phoneme string was calculated by using the phoneme accuracy rate (PAR) as follows: DISPLAYFORM0
The numerator of equation ( EQREF52 ) is calculated by using the Levenshtein distance between the correct phoneme string and the recognition phoneme string. INLINEFORM0 denotes the number of substitutions; INLINEFORM1 , the number of deletions; and INLINEFORM2 , the number of insertions. INLINEFORM3 represents the number of phonemes of the correct phoneme string.
Table TABREF54 shows the results of PAR. Table TABREF55 presents examples of the word segmentation results of the three considered methods. We found that the unsupervised morphological analyzer capable of using lattices improved the accuracy of phoneme recognition and word segmentation. This result suggests that this word segmentation method considers the multiple hypotheses of speech recognition as a whole and reduces uncertainty, such as variability in recognition, by using the syllable recognition results in the lattice format.
Estimation accuracy of spatial concepts
We compared the matching rate with the estimation results of index INLINEFORM0 of the spatial concepts of each teaching utterance and the classification results of the correct answer given by humans. The evaluation of this experiment used the adjusted Rand index (ARI) BIBREF33 . ARI is a measure of the degree of similarity between two clustering results.
Further, to investigate the effect of lexical acquisition using location information, we compared the proposed method with a word clustering method that does not use location information. In particular, the method of word clustering without location information used a Dirichlet process mixture (DPM) of unigram models with an SBP representation. The DPM hyperparameters corresponding to those of the proposed method were set to the same values, and the model was estimated using Gibbs sampling.
Fig. FIGREF59 shows the results of the average of the ARI values of 10 trials of learning by Gibbs sampling. Here, we found that the proposed method showed the best score. These results and the results reported in Section SECREF49 suggest that learning by uttered sentences obtained by better phoneme recognition and better word segmentation produces a good result for the acquisition of spatial concepts. Furthermore, in a comparison of two clustering methods, we found that SpCoA was considerably better than DPM, a word clustering method without location information, irrespective of the word segmentation method used. The experimental results showed that it is possible to improve the estimation accuracy of spatial concepts and vocabulary by performing word clustering that considered location information.
Accuracy of acquired phoneme sequences representing the names of places
We evaluated whether the names of places were properly learned for the considered teaching places. This experiment assumes that the robot is asked for the best phoneme sequence INLINEFORM0 representing its self-position INLINEFORM1 . The robot moves close to each teaching place. The probability of a word INLINEFORM2 when the self-position INLINEFORM3 of the robot is given, INLINEFORM4 , can be obtained by using equation ( EQREF37 ). The word having the highest probability was selected. We compared the PAR between the correct phoneme sequence and the selected name of the place. Because “kiqchiN” and “daidokoro” were taught for the same place, the word with the higher PAR score was adopted.
Fig. FIGREF63 shows the results of PAR for the word considered to be the name of a place. SpCoA (latticelm), the proposed method using the results of unsupervised word segmentation on the basis of the speech recognition results in the lattice format, showed the best PAR score. In the 1-best and BoS methods, part of the syllable sequence of the name of a place was segmented too finely, as shown in Table TABREF55 . Therefore, the robot could not learn the name of the teaching place as a coherent phoneme sequence. In contrast, the robot could learn the names of the teaching places more accurately by using the proposed method.
Self-localization that utilizes acquired spatial concepts
In this experiment, we validate that the robot can make efficient use of the acquired spatial concepts. We compare the estimation accuracy of localization for the proposed method (SpCoA MCL) and the conventional MCL. When a robot comes to the learning target, the utterer speaks out the sentence containing the name of the place once again for the robot. The moving trajectory of the robot and the uttered positions are the same in all the trials. In particular, the uttered sentence is “kokowa ** dayo”. When learning a task, this phrase is not used. The number of particles is INLINEFORM0 , and the initial particles are uniformly distributed in the considered environment. The robot performs a control operation for each time step.
The estimation error in the localization is evaluated as follows: While running localization, we record the estimation error (equation ( EQREF66 )) on the INLINEFORM0 plane of the floor for each time step. DISPLAYFORM0
where INLINEFORM0 denote the true position coordinates of the robot as obtained from the simulator, and INLINEFORM1 , INLINEFORM2 represent the weighted mean values of localization coordinates. The normalized weight INLINEFORM3 is obtained from the sensor model in MCL as a likelihood. In the utterance time, this likelihood is multiplied by the value calculated using equation ( EQREF37 ). INLINEFORM4 , INLINEFORM5 denote the INLINEFORM6 -coordinate and the INLINEFORM7 -coordinate of index INLINEFORM8 of each particle at time INLINEFORM9 . After running the localization, we calculated the average of INLINEFORM10 .
Further, we compared the estimation accuracy rate (EAR) of the global localization. In each trial, we calculated the proportion of time steps in which the estimation error was less than 50 cm.
Fig. FIGREF68 shows the results of the estimation error and the EAR for 10 trials of each method. All trials of SpCoA MCL (latticelm) and almost all trials of the method using 1-best NPYLM and BoS showed relatively small estimation errors. Results of the second trial of 1-best NPYLM and the fifth trial of BoS showed higher estimation errors. In these trials, many particles converged to other places instead of the place where the robot was, based on utterance information. Nevertheless, compared with those of the conventional MCL, the results obtained using spatial concepts showed an obvious improvement in the estimation accuracy. Consequently, spatial concepts acquired by using the proposed method proved to be very helpful in improving the localization accuracy.
Experiment II
In this experiment, the effectiveness of the proposed method was tested by using an autonomous mobile robot TurtleBot 2 in a real environment. Fig. FIGREF70 shows TurtleBot 2 used in the experiments.
Mapping and self-localization are performed by the robot operating system (ROS). The speech recognition system, the microphone, and the unsupervised morphological analyzer were the same as those described in Section SECREF4 .
Learning of spatial concepts in the real environment
We conducted an experiment of the spatial concept acquisition in a real environment of an entire floor of a building. In this experiment, self-localization was performed using a map generated by SLAM. The initial particles are defined by the true initial position of the robot. The generated map in the real environment and the names of teaching places are shown in Fig. FIGREF73 . The number of teaching places was 19, and the number of teaching names was 16. The teaching utterances were performed for a total of 100 times.
Fig. FIGREF75 shows the position distributions learned on the map. Table TABREF76 shows the five best elements of the multinomial distributions of the name of place INLINEFORM0 and the multinomial distributions of the indices of the position distribution INLINEFORM1 for each index of spatial concept INLINEFORM2 .
Thus, we found that the proposed method can learn the names of places corresponding to the considered teaching places in the real environment. For example, in the spatial concept of index INLINEFORM0 , “torire” was learned to correspond to a position distribution of INLINEFORM1 . Similarly, “kidanokeN” corresponded to INLINEFORM2 in INLINEFORM3 , and “kaigihitsu” corresponded to INLINEFORM4 in INLINEFORM5 . In the spatial concept of index INLINEFORM6 , part of the syllable sequences was segmented too finely, as “sohatsuke”, “N”, and “tani”, “guchi”. In this case, the robot was taught two types of names. These words were learned to correspond to the same position distribution of INLINEFORM7 . In INLINEFORM8 , “gomibako” showed a high probability, and it corresponded to three position distributions, INLINEFORM9 . The position distribution of INLINEFORM10 had the fourth highest probability in the spatial concept INLINEFORM11 . Therefore, “raqkukeN,” which had the fifth highest probability in the spatial concept INLINEFORM12 (and was expected to relate to the spatial concept INLINEFORM13 ), could be estimated as a word drawn from spatial concept INLINEFORM14 . However, in practice, this situation did not cause any severe problems because the spatial concept of index INLINEFORM15 had higher probabilities for the word “rapukeN” and the position distribution INLINEFORM16 than INLINEFORM17 did. In the probabilistic model, the relative probability and the integrative information are important. When the robot listened to an utterance related to “raqkukeN,” it could make use of the spatial concept of index INLINEFORM18 for self-localization with a high probability, and appropriately updated its estimated self-location. We expected the spatial concept of index INLINEFORM19 to be learned as two separate spatial concepts. However, “watarirooka” and “kaidaNmae” were learned as the same spatial concept. Therefore, the multinomial distribution INLINEFORM20 showed a higher probability for the indices of the position distribution corresponding to the teaching places of both “watarirooka” and “kaidaNmae”.
The proposed method adopts a nonparametric Bayesian method in which it is possible to form spatial concepts that allow many-to-many correspondences between names and places. In contrast, this can create ambiguity that classifies originally different spatial concepts into one spatial concept as a side effect. There is a possibility that the ambiguity of concepts such as INLINEFORM0 will have a negative effect on self-localization, even though the self-localization performance was (overall) clearly increased by employing the proposed method. The solution of this problem will be considered in future work.
In terms of the PAR of uttered sentences, the evaluation value from the evaluation method used in Section SECREF49 is 0.83; this value is comparable to the result in Section SECREF49 . However, in terms of the PAR of the name of the place, the evaluation value from the evaluation method used in Section SECREF60 is 0.35, which is lower than that in Section SECREF60 . We consider that the increase in uncertainty in the real environment and the increase in the number of teaching words reduced the performance. We expect that this problem could be improved using further experience related to places, e.g., if the number of utterances per place is increased, and additional sensory information is provided.
Modification of localization by the acquired spatial concepts
In this experiment, we verified how the acquired spatial concepts modify self-localization in global self-localization. This experiment used the learning results of spatial concepts presented in Section SECREF71 . The experimental procedure was as follows. The initial particles were uniformly distributed over the entire floor. The robot began to move toward the target place from a short distance away. When the robot reached the target place, the utterer spoke a sentence containing the name of the place to the robot. Upon obtaining the speech information, the robot modified its self-localization on the basis of the acquired spatial concepts. The number of particles was the same as that mentioned in Section SECREF71 .
Fig. FIGREF80 shows the results of the self-localization before (the top part of the figure) and after (the bottom part of the figure) the utterance for three places. The particle states are denoted by red arrows. The moving trajectory of the robot is indicated by a green dotted arrow. Figs. FIGREF80 (a), (b), and (c) show the results for the names of places “toire”, “souhatsukeN”, and “gomibako”. Further, three spatial concepts, i.e., those at INLINEFORM0 , were learned as “gomibako”. In this experiment, the utterer spoke to the robot when the robot came close to the place of INLINEFORM1 . In all the examples shown in the top part of the figure, the particles were dispersed over several places. In contrast, in all the examples shown in the bottom part of the figure, the number of particles near the true position of the robot clearly increased. Thus, we can conclude that the proposed method can modify self-localization by using spatial concepts.
Conclusion and Future Work
In this paper, we discussed the spatial concept acquisition, lexical acquisition related to places, and self-localization using acquired spatial concepts. We proposed nonparametric Bayesian spatial concept acquisition method SpCoA that integrates latticelm BIBREF22 , a spatial clustering method, and MCL. We conducted experiments for evaluating the performance of SpCoA in a simulation and a real environment. SpCoA showed good results in all the experiments. In experiments of the learning of spatial concepts, the robot could form spatial concepts for the places of the learning targets from human continuous speech signals in both the room of the simulation environment and the entire floor of the real environment. Further, the unsupervised word segmentation method latticelm could reduce the variability and errors in the recognition of phonemes in all the utterances. SpCoA achieved more accurate lexical acquisition by performing word segmentation using the lattices of the speech recognition results. In the self-localization experiments, the robot could effectively utilize the acquired spatial concepts for recognizing self-position and reducing the estimation errors in self-localization. As a method that further improves the performance of the lexical acquisition, a mutual learning method was proposed by Nakamura et al. on the basis of the integration of the learning of object concepts with a language model BIBREF34 , BIBREF35 . Following a similar approach, Heymann et al. proposed a method that alternately and repeatedly updates phoneme recognition results and the language model by using unsupervised word segmentation BIBREF36 . As a result, they achieved robust lexical acquisition. In our study, we can expect to improve the accuracy of lexical acquisition for spatial concepts by estimating both the spatial concepts and the language model.
Furthermore, as a future work, we consider it necessary for robots to learn spatial concepts online and to recognize whether the uttered word indicates the current place or destination. Furthermore, developing a method that simultaneously acquires spatial concepts and builds a map is one of our future objectives. We believe that the spatial concepts will have a positive effect on the mapping. We also intend to examine a method that associates the image and the landscape with spatial concepts and a method that estimates both spatial concepts and object concepts.
[] Akira Taniguchi received his BE degree from Ritsumeikan University in 2013 and his ME degree from the Graduate School of Information Science and Engineering, Ritsumeikan University, in 2015. He is currently working toward his PhD degree at the Emergent System Lab, Ritsumeikan University, Japan. His research interests include language acquisition, concept acquisition, and symbol emergence in robotics.
[] Tadahiro Taniguchi received the ME and PhD degrees from Kyoto University in 2003 and 2006, respectively. From April 2005 to March 2006, he was a Japan Society for the Promotion of Science (JSPS) research fellow (DC2) in the Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University. From April 2006 to March 2007, he was a JSPS research fellow (PD) in the same department. From April 2007 to March 2008, he was a JSPS research fellow in the Department of Systems Science, Graduate School of Informatics, Kyoto University. From April 2008 to March 2010, he was an assistant professor at the Department of Human and Computer Intelligence, Ritsumeikan University. Since April 2010, he has been an associate professor in the same department. He is currently engaged in research on machine learning, emergent systems, and semiotics.
[] Tetsunari Inamura received the BE, MS and PhD degrees from the University of Tokyo, in 1995, 1997 and 2000, respectively. He was a Researcher of the CREST program, Japanese Science and Technology Cooperation, from 2000 to 2003, and then joined the Department of Mechano-Informatics, School of Information Science and Technology, University of Tokyo as a Lecturer, from 2003 to 2006. He is now an Associate Professor in the Principles of Informatics Research Division, National Institute of Informatics, and an Associate Professor in the Department of Informatics, School of Multidisciplinary Sciences, Graduate University for Advanced Studies (SOKENDAI). His research interests include imitation learning and symbol emergence on humanoid robots, development of interactive robots through virtual reality and so on. | unsupervised word segmentation method latticelm |
1bdf7e9f3f804930b2933ebd9207a3e000b27742 | 1bdf7e9f3f804930b2933ebd9207a3e000b27742_0 | Q: Does their model start with any prior knowledge of words?
Text: Introduction
Autonomous robots, such as service robots, operating in the human living environment with humans have to be able to perform various tasks and language communication. To this end, robots are required to acquire novel concepts and vocabulary on the basis of the information obtained from their sensors, e.g., laser sensors, microphones, and cameras, and recognize a variety of objects, places, and situations in an ambient environment. Above all, we consider it important for the robot to learn the names that humans associate with places in the environment and the spatial areas corresponding to these names; i.e., the robot has to be able to understand words related to places. Therefore, it is important to deal with considerable uncertainty, such as the robot's movement errors, sensor noise, and speech recognition errors.
Several studies on language acquisition by robots have assumed that robots have no prior lexical knowledge. These studies differ from speech recognition studies based on a large vocabulary and natural language processing studies based on lexical, syntactic, and semantic knowledge BIBREF0 , BIBREF1 . Studies on language acquisition by robots also constitute a constructive approach to the human developmental process and the emergence of symbols.
The objectives of this study were to build a robot that learns words related to places and efficiently utilizes this learned vocabulary in self-localization. Lexical acquisition related to places is expected to enable a robot to improve its spatial cognition. A schematic representation depicting the target task of this study is shown in Fig. FIGREF3 . This study assumes that a robot does not have any vocabularies in advance but can recognize syllables or phonemes. The robot then performs self-localization while moving around in the environment, as shown in Fig. FIGREF3 (a). An utterer speaks a sentence including the name of the place to the robot, as shown in Fig. FIGREF3 (b). For the purposes of this study, we need to consider the problems of self-localization and lexical acquisition simultaneously.
When a robot learns novel words from utterances, it is difficult to determine segmentation boundaries and the identity of different phoneme sequences from the speech recognition results, which can lead to errors. First, let us consider the case of the lexical acquisition of an isolated word. For example, if a robot obtains the speech recognition results “aporu”, “epou”, and “aqpuru” (incorrect phoneme recognition of apple), it is difficult for the robot to determine whether they denote the same referent without prior knowledge. Second, let us consider a case of the lexical acquisition of the utterance of a sentence. For example, a robot obtains a speech recognition result, such as “thisizanaporu.” The robot necessarily has to segment a sentence into individual words, e.g., “this”, “iz”, “an”, and “aporu”. In addition, it is necessary for the robot to recognize words referring to the same referent, e.g., the fruit apple, from among the many segmented results that contain errors. In the case of Fig. FIGREF3 (c), there is some possibility of learning names that include phoneme errors, e.g., “afroqtabutibe,” because the robot does not have any lexical knowledge. On the other hand, when a robot performs online probabilistic self-localization, we assume that the robot uses sensor data and control data, e.g., values obtained using a range sensor and odometry. If the position of the robot on the global map is unclear, it becomes difficult to identify the self-position using only local sensor information. In the case of global localization using local information, e.g., a range sensor, the problem that hypotheses of the self-position remain in multiple remote locations frequently occurs, as shown in Fig. FIGREF3 (d).
In order to solve the abovementioned problems, in this study, we adopted the following approach. An utterance is recognized as not a single phoneme sequence but a set of candidates of multiple phonemes. We attempt to suppress the variability in the speech recognition results by performing word discovery taking into account the multiple candidates of speech recognition. In addition, the names of places are learned by associating with words and positions. The lexical acquisition is complemented by using certain particular spatial information; i.e., this information is obtained by hearing utterances including the same word in the same place many times. Furthermore, in this study, we attempt to address the problem of the uncertainty of self-localization by improving the self-position errors by using a recognized utterance including the name of the current place and the acquired spatial concepts, as shown in Fig. FIGREF3 (e).
In this paper, we propose nonparametric Bayesian spatial concept acquisition method (SpCoA) on basis of unsupervised word segmentation and a nonparametric Bayesian generative model that integrates self-localization and a clustering in both words and places. The main contributions of this paper are as follows:
The remainder of this paper is organized as follows: In Section SECREF2 , previous studies on language acquisition and lexical acquisition relevant to our study are described. In Section SECREF3 , the proposed method SpCoA is presented. In Sections SECREF4 and SECREF5 , we discuss the effectiveness of SpCoA in the simulation and in the real environment. Section SECREF6 concludes this paper.
Lexical acquisition
Most studies on lexical acquisition typically focus on lexicons about objects BIBREF0 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Many of these studies have not be able to address the lexical acquisition of words other than those related to objects, e.g., words about places.
Roy et al. proposed a computational model that enables a robot to learn the names of objects from an object image and spontaneous infant-directed speech BIBREF0 . Their results showed that the model performed speech segmentation, word discovery, and visual categorization. Iwahashi et al. reported that a robot properly understands the situation and acquires the relationship of object behaviors and sentences BIBREF2 , BIBREF3 , BIBREF4 . Qu & Chai focused on the conjunction between speech and eye gaze and the use of domain knowledge in lexical acquisition BIBREF6 , BIBREF7 . They proposed an unsupervised learning method that automatically acquires novel words for an interactive system. Qu & Chai's method based on the IBM translation model BIBREF11 estimates the word-entity association probability.
Nakamura et al. proposed a method to learn object concepts and word meanings from multimodal information and verbal information BIBREF9 . The method proposed in BIBREF9 is a categorization method based on multimodal latent Dirichlet allocation (MLDA) that enables the acquisition of object concepts from multimodal information, such as visual, auditory, and haptic information BIBREF12 . Araki et al. addressed the development of a method combining unsupervised word segmentation from uttered sentences by a nested Pitman-Yor language model (NPYLM) BIBREF13 and the learning of object concepts by MLDA BIBREF10 . However, the disadvantage of using NPYLM was that phoneme sequences with errors did not result in appropriate word segmentation.
These studies did not address the lexical acquisition of the space and place that can also tolerate the uncertainty of phoneme recognition. However, for the introduction of robots into the human living environment, robots need to acquire a lexicon related to not only objects but also places. Our study focuses on the lexical acquisition related to places. Robots can adaptively learn the names of places in various human living environments by using SpCoA. We consider that the acquired names of places can be useful for various tasks, e.g., tasks with a movement of robots by the speech instruction.
Simultaneous learning of places and vocabulary
The following studies have addressed lexical acquisition related to places. However, these studies could not utilize the learned language knowledge in other estimations such as the self-localization of a robot.
Taguchi et al. proposed a method for the unsupervised learning of phoneme sequences and relationships between words and objects from various user utterances without any prior linguistic knowledge other than an acoustic model of phonemes BIBREF1 , BIBREF14 . Further, they proposed a method for the simultaneous categorization of self-position coordinates and lexical learning BIBREF15 . These experimental results showed that it was possible to learn the name of a place from utterances in some cases and to output words corresponding to places in a location that was not used for learning.
Milford et al. proposed RatSLAM inspired by the biological knowledge of a pose cell of the hippocampus of rodents BIBREF16 . Milford et al. proposed a method that enables a robot to acquire spatial concepts by using RatSLAM BIBREF17 . Further, Lingodroids, mobile robots that learn a language through robot-to-robot communication, have been studied BIBREF18 , BIBREF19 , BIBREF20 . Here, a robot communicated the name of a place to other robots at various locations. Experimental results showed that two robots acquired the lexicon of places that they had in common. In BIBREF20 , the researchers showed that it was possible to learn temporal concepts in a manner analogous to the acquisition of spatial concepts. These studies reported that the robots created their own vocabulary. However, these studies did not consider the acquisition of a lexicon by human-to-robot speech interactions.
Welke et al. proposed a method that acquires spatial representation by the integration of the representation of the continuous state space on the sensorimotor level and the discrete symbolic entities used in high-level reasoning BIBREF21 . This method estimates the probable spatial domain and word from the given objects by using the spatial lexical knowledge extracted from Google Corpus and the position information of the object. Their study is different from ours because their study did not consider lexicon learning from human speech.
In the case of global localization, the hypothesis of self-position often remains in multiple remote places. In this case, there is some possibility of performing an incorrect estimation and increasing the estimation error. This problem exists during teaching tasks and self-localization after the lexical acquisition. The abovementioned studies could not deal with this problem. In this paper, we have proposed a method that enables a robot to perform more accurate self-localization by reducing the estimation error at teaching time using a smoothing method and by utilizing the words acquired through lexical acquisition. The strengths of this study are that the learning of spatial concepts and self-localization are represented as one generative model and that robots are able to autonomously utilize the acquired lexicon for self-localization.
Spatial Concept Acquisition
We propose nonparametric Bayesian spatial concept acquisition method (SpCoA) that integrates a nonparametric morphological analyzer for the lattice BIBREF22 , i.e., latticelm, a spatial clustering method, and Monte Carlo localization (MCL) BIBREF23 .
Generative model
In our study, we define a position as a specific coordinate or a local point in the environment, and the position distribution as the spatial area of the environment. Further, we define a spatial concept as the names of places and the position distributions corresponding to these names.
The model that was developed for spatial concept acquisition is a probabilistic generative model that integrates a self-localization with the simultaneous clustering of places and words. Fig. FIGREF13 shows the graphical model for spatial concept acquisition. Table TABREF14 shows each variable of the graphical model. The number of words in a sentence at time INLINEFORM0 is denoted as INLINEFORM1 . The generative model of the proposed method is defined as equation ( EQREF11 -). DISPLAYFORM0
Then, the probability distribution for equation () can be defined as follows: DISPLAYFORM0
The prior distribution configured by using the stick breaking process (SBP) BIBREF24 is denoted as INLINEFORM0 , the multinomial distribution as INLINEFORM1 , the Dirichlet distribution as INLINEFORM2 , the inverse–Wishart distribution as INLINEFORM3 , and the multivariate Gaussian (normal) distribution as INLINEFORM4 . The motion model and the sensor model of self-localization are denoted as INLINEFORM5 and INLINEFORM6 in equations () and (), respectively.
This model can learn an appropriate number of spatial concepts, depending on the data, by using a nonparametric Bayesian approach. We use the SBP, which is one of the methods based on the Dirichlet process. In particular, this model can consider a theoretically infinite number of spatial concepts INLINEFORM0 and position distributions INLINEFORM1 . Exact SBP computations are intractable because they involve an infinite number of parameters. In this study, we truncate the number of parameters at sufficiently large values, i.e., we use a weak-limit approximation BIBREF25 .
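As a concrete illustration of the weak-limit approximation, the following sketch draws truncated stick-breaking weights with NumPy; the concentration parameter and the truncation level are illustrative assumptions, not the settings used in this study.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Weak-limit (truncated) stick-breaking construction of mixture weights."""
    betas = rng.beta(1.0, alpha, size=truncation)                 # stick proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining
    weights[-1] = 1.0 - weights[:-1].sum()                        # absorb the leftover mass
    return weights

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha=1.0, truncation=10, rng=rng)    # weights over spatial concepts
print(pi, pi.sum())
```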
It is possible to correlate a name with multiple places, e.g., “staircase” is in two different places, and a place with multiple names, e.g., “toilet” and “restroom” refer to the same place. Spatial concepts are represented by a word distribution of the names of the place INLINEFORM0 and several position distributions ( INLINEFORM1 , INLINEFORM2 ) indicated by a multinomial distribution INLINEFORM3 . In other words, this model is capable of relating the mixture of Gaussian distributions to a multinomial distribution of the names of places. It should be noted that the arrows connecting INLINEFORM4 to the surrounding nodes of the proposed graphical model differ from those of an ordinary Gaussian mixture model (GMM). We assume that words obtained by the robot do not change its position, but that the position of the robot affects the distribution of words. Therefore, the proposed generative process assumes that the index of position distribution INLINEFORM5 , i.e., the category of the place, is generated from the position of the robot INLINEFORM6 . This change can be naturally introduced without any trouble by introducing equation ( EQREF12 ).
Overview of the proposed method SpCoA
We assume that a robot performs self-localization by using control data and sensor data at all times. The procedure for the learning of spatial concepts is as follows:
An utterer teaches a robot the names of places, as shown in Fig. FIGREF3 (b). Every time the robot arrives at a place that was a designated learning target, the utterer says a sentence, including the name of the current place.
The robot performs speech recognition from the uttered speech signal data. Thus, the speech recognition system includes a word dictionary of only Japanese syllables. The speech recognition results are obtained in a lattice format.
Word segmentation is performed by using the lattices of the speech recognition results.
The robot learns spatial concepts from words obtained by word segmentation and robot positions obtained by self-localization for all teaching times. The details of the learning are given in SECREF23 .
The procedure for self-localization utilizing spatial concepts is as follows:
The words of the learned spatial concepts are registered to the word dictionary of the speech recognition system.
When a robot obtains a speech signal, speech recognition is performed. Then, a word sequence as the 1-best speech recognition result is obtained.
The robot modifies the self-localization from words obtained by speech recognition and the position likelihood obtained by spatial concepts. The details of self-localization are provided in SECREF35 .
The proposed method can learn words related to places from the utterances of sentences. We use an unsupervised word segmentation method, latticelm, that can directly segment words from the lattices of the speech recognition results of the uttered sentences BIBREF22 . A lattice can compactly represent a set of promising hypotheses of a speech recognition result, such as the N-best hypotheses, in a directed graph format. Unsupervised word segmentation using the lattices of syllable recognition is expected to reduce the variability and errors in phonemes as compared to NPYLM BIBREF13 , i.e., word segmentation using only the 1-best speech recognition results.
The self-localization method adopts MCL BIBREF23 , a method that is generally used as the localization of mobile robots for simultaneous localization and mapping (SLAM) BIBREF26 . We assume that a robot generates an environment map by using MCL-based SLAM such as FastSLAM BIBREF27 , BIBREF28 in advance, and then, performs localization by using the generated map. Then, the environment map of both an occupancy grid map and a landmark map is acceptable.
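For readers unfamiliar with MCL, the following is a minimal, self-contained particle-filter sketch of the predict–weight–resample cycle; the motion noise and the sensor model below are placeholder Gaussians, not the robot's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_update(particles, control, noise_std=0.05):
    """Propagate each particle (x, y, theta) with noisy odometry."""
    noise = rng.normal(0.0, noise_std, size=particles.shape)
    return particles + np.asarray(control) + noise

def sensor_likelihood(particles, observation, sigma=0.3):
    """Placeholder sensor model: Gaussian likelihood around an observed (x, y) fix."""
    d2 = np.sum((particles[:, :2] - observation) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) + 1e-300   # avoid all-zero weights

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights / weights.sum())
    return particles[idx]

particles = rng.uniform(-5.0, 5.0, size=(1000, 3))      # uniform init (global localization)
for control, obs in [((0.1, 0.0, 0.0), np.array([0.1, 0.0]))]:
    particles = motion_update(particles, control)
    weights = sensor_likelihood(particles, obs)
    particles = resample(particles, weights)
```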
Learning of spatial concept
Spatial concepts are learned from multiple teaching data, control data, and sensor data. The teaching data are a set of uttered sentences for all teaching times. Segmented words of an uttered sentence are converted into a bag-of-words (BoW) representation as a vector of the occurrence counts of words INLINEFORM0 . The set of the teaching times is denoted as INLINEFORM1 , and the number of teaching data items is denoted as INLINEFORM2 . The model parameters are denoted as INLINEFORM3 . The initial values of the model parameters can be set arbitrarily in accordance with a condition. Further, the sampling values of the model parameters from the following joint posterior distribution are obtained by performing Gibbs sampling. DISPLAYFORM0
where the hyperparameters of the model are denoted as INLINEFORM0 . The algorithm of the learning of spatial concepts is shown in Algorithm SECREF23 .
The conditional posterior distribution of each element used for performing Gibbs sampling can be expressed as follows: An index INLINEFORM0 of the position distribution is sampled for each data INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
An index INLINEFORM0 of the spatial concepts is sampled for each data item INLINEFORM1 from a posterior distribution as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector of the occurrence counts of words in the sentence at time INLINEFORM1 . A posterior distribution representing word probabilities of the name of place INLINEFORM2 is calculated as follows: DISPLAYFORM0
where variables with the subscript INLINEFORM0 denote the set of all teaching times. A word probability of the name of place INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 represents the posterior parameter and INLINEFORM1 denotes the BoW representation of all sentences of INLINEFORM2 in INLINEFORM3 . A posterior distribution representing the position distribution INLINEFORM4 is calculated as follows: DISPLAYFORM0
A position distribution INLINEFORM0 , INLINEFORM1 is sampled for each INLINEFORM2 as follows: DISPLAYFORM0
where INLINEFORM0 denotes the Gaussian–inverse–Wishart distribution; INLINEFORM1 , and INLINEFORM2 represent the posterior parameters; and INLINEFORM3 indicates the set of the teaching positions of INLINEFORM4 in INLINEFORM5 . A topic probability distribution INLINEFORM6 of spatial concepts is sampled as follows: DISPLAYFORM0
A posterior distribution representing the mixed weights INLINEFORM0 of the position distributions is calculated as follows: DISPLAYFORM0
A mixed weight INLINEFORM0 of the position distributions is sampled for each INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 denotes a vector counting all the indices of the Gaussian distribution of INLINEFORM1 in INLINEFORM2 .
Self-positions INLINEFORM0 are sampled by using a Monte Carlo fixed-lag smoother BIBREF29 in the learning phase. The smoother can estimate self-position INLINEFORM1 and not INLINEFORM2 , i.e., a sequential estimation from the given data INLINEFORM3 until time INLINEFORM4 , but it can estimate INLINEFORM5 , i.e., an estimation from the given data INLINEFORM6 until time INLINEFORM7 later than INLINEFORM8 INLINEFORM9 . In general, the smoothing method can provide a more accurate estimation than the MCL of online estimation. In contrast, if the self-position of a robot INLINEFORM10 is sampled like direct assignment sampling for each time INLINEFORM11 , the sampling of INLINEFORM12 is divided in the case with the teaching time INLINEFORM13 and another time INLINEFORM14 as follows: DISPLAYFORM0
[tb] Learning of spatial concepts [1] INLINEFORM0 , INLINEFORM1 Localization and speech recognition INLINEFORM2 to INLINEFORM3 INLINEFORM4 BIBREF29 the speech signal is observed INLINEFORM5 add INLINEFORM6 to INLINEFORM7 Registering the lattice add INLINEFORM8 to INLINEFORM9 Registering the teaching time Word segmentation using lattices INLINEFORM10 BIBREF22 Gibbs sampling Initialize parameters INLINEFORM11 , INLINEFORM12 , INLINEFORM13 INLINEFORM14 to INLINEFORM15 INLINEFORM16 ( EQREF25 ) INLINEFORM17 ( EQREF26 ) INLINEFORM18 ( EQREF28 ) INLINEFORM19 ( EQREF30 ) INLINEFORM20 ( EQREF31 ) INLINEFORM21 ( EQREF33 ) INLINEFORM22 to INLINEFORM23 INLINEFORM24 ( EQREF34 ) INLINEFORM25
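To make one step of this Gibbs sweep concrete, the sketch below resamples the position-distribution index for every teaching time; the exact form of the conditional, the variable shapes, and the numerical safeguards are simplifying assumptions on my part rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_position_indices(x, C, mu, Sigma, phi, rng):
    """Resample the position-distribution index i_t for every teaching time t.

    Assumed shapes: x is (T, 2) robot positions, C is (T,) concept indices,
    mu/Sigma hold K Gaussian parameters, and phi[l, k] is the mixture weight of
    position distribution k under spatial concept l (a simplified, hedged
    reading of the conditional described above).
    """
    T, K = len(x), len(mu)
    i_new = np.empty(T, dtype=int)
    for t in range(T):
        logp = np.array([
            multivariate_normal.logpdf(x[t], mu[k], Sigma[k]) + np.log(phi[C[t], k] + 1e-12)
            for k in range(K)
        ])
        p = np.exp(logp - logp.max())
        i_new[t] = rng.choice(K, p=p / p.sum())
    return i_new
```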
Self-localization after learning spatial concepts
A robot that acquires spatial concepts can leverage spatial concepts to self-localization. The estimated model parameters INLINEFORM0 and a speech recognition sentence INLINEFORM1 at time INLINEFORM2 are given to the condition part of the probability formula of MCL as follows: DISPLAYFORM0
When the robot hears the name of a place spoken by the utterer, in addition to the likelihood of the sensor model of MCL, the likelihood of INLINEFORM0 with respect to a speech recognition sentence is calculated as follows: DISPLAYFORM0
The algorithm of self-localization utilizing spatial concepts is shown in Algorithm SECREF35 . The set of particles is denoted as INLINEFORM0 , the temporary set that stores the pairs of the particle INLINEFORM1 and the weight INLINEFORM2 , i.e., INLINEFORM3 , is denoted as INLINEFORM4 . The number of particles is INLINEFORM5 . The function INLINEFORM6 is a function that moves each particle from its previous state INLINEFORM7 to its current state INLINEFORM8 by using control data. The function INLINEFORM9 calculates the likelihood of each particle INLINEFORM10 using sensor data INLINEFORM11 . These functions are normally used in MCL. For further details, please refer to BIBREF26 . In this case, a speech recognition sentence INLINEFORM12 is obtained by the speech recognition system using a word dictionary containing all the learned words. [tb] Self-localization utilizing spatial concepts [1] INLINEFORM13 INLINEFORM14 INLINEFORM15 INLINEFORM16 to INLINEFORM17 INLINEFORM18 () INLINEFORM19 () the speech signal is observed INLINEFORM20 add INLINEFORM21 to INLINEFORM22 INLINEFORM23 to INLINEFORM24 draw INLINEFORM25 with probability INLINEFORM26 add INLINEFORM27 to INLINEFORM28 INLINEFORM29
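A rough sketch of the extra weighting step in this algorithm is given below; the structure of the learned parameters (word probabilities, mixture weights, and Gaussian parameters per concept) is an assumed simplification, and the likelihood follows the equation above only approximately.

```python
import numpy as np
from scipy.stats import multivariate_normal

def speech_likelihood(particle_xy, words, concepts):
    """Extra particle weight when an utterance naming the current place is heard.

    `concepts` is assumed to be a list of dicts holding word probabilities ("W"),
    mixture weights over position distributions ("phi"), and Gaussian parameters
    ("mu", "Sigma") -- a simplified stand-in for the learned model parameters.
    """
    lik = 0.0
    for c in concepts:
        word_term = np.prod([c["W"].get(w, 1e-6) for w in words])
        for k, weight in enumerate(c["phi"]):
            lik += word_term * weight * multivariate_normal.pdf(particle_xy, c["mu"][k], c["Sigma"][k])
    return lik

# During MCL, the sensor-model weight of each particle would be multiplied by this
# value whenever a sentence containing a place name is recognized.
```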
Experiment I
In this experiment, we validate the effectiveness of the proposed method (SpCoA) in an environment simulated on the simulator platform SIGVerse BIBREF30 , which enables the simulation of social interactions. Speech recognition is performed using the Japanese continuous speech recognition system Julius BIBREF31 , BIBREF32 . The set of 43 Japanese phonemes defined by the Acoustical Society of Japan (ASJ)'s speech database committee is adopted by Julius BIBREF31 . The representation of these phonemes is also adopted in this study. The Julius system uses a word dictionary containing 115 Japanese syllables. The microphone attached to the robot is SHURE's PG27-USB. Further, latticelm 0.4 is used as the unsupervised morphological analyzer BIBREF22 .
In the experiment, we compare the following three types of word segmentation methods. A set of syllable sequences is given to the graphical model of SpCoA by each method. This set is used for the learning of spatial concepts as recognized uttered sentences INLINEFORM0 .
The remainder of this section is organized as follows: In Section SECREF43 , the conditions and results of learning spatial concepts are described. The experiments performed using the learned spatial concepts are described in Section SECREF49 to SECREF64 . In Section SECREF49 , we evaluate the accuracy of the phoneme recognition and word segmentation for uttered sentences. In Section SECREF56 , we evaluate the clustering accuracy of the estimation results of index INLINEFORM0 of spatial concepts for each teaching utterance. In Section SECREF60 , we evaluate the accuracy of the acquisition of names of places. In Section SECREF64 , we show that spatial concepts can be utilized for effective self-localization.
Learning of spatial concepts
We conduct this experiment of spatial concept acquisition in the environment prepared on SIGVerse. The experimental environment is shown in Fig. FIGREF45 . A mobile robot can move by performing forward, backward, right rotation, or left rotation movements on a two-dimensional plane. In this experiment, the robot can use an approximately correct map of the considered environment. The robot has a range sensor in front and performs self-localization on the basis of an occupancy grid map. The initial particles are defined by the true initial position of the robot. The number of particles is INLINEFORM0 .
The lag value of the Monte Carlo fixed-lag smoothing is fixed at 100. The other parameters of this experiment are as follows: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 . The number of iterations used for Gibbs sampling is 100. This experiment does not include the direct assignment sampling of INLINEFORM9 in equation ( EQREF34 ), i.e., lines 22–24 of Algorithm SECREF23 are omitted, because we consider that the self-position can be obtained with sufficiently good accuracy by using the Monte Carlo smoothing. Eight places are selected as the learning targets, and eight types of place names are considered. Each uttered place name is shown in Fig. FIGREF45 . These utterances include the same name in different places, i.e., “teeburunoatari” (which means near the table in English), and different names in the same place, i.e., “kiqchiN” and “daidokoro” (which mean a kitchen in English). The other teaching names are “geNkaN” (which means an entrance or a doorway in English); “terebimae” (which means the front of the TV in English); “gomibako” (which means a trash box in English); “hoNdana” (which means a bookshelf in English); and “sofaamae” (which means the front of the sofa in English). The teaching utterances, including the 10 types of phrases, are spoken for a total of 90 times. The phrases in each uttered sentence are listed in Table TABREF46 .
The learning results of spatial concepts obtained by using the proposed method are presented here. Fig. FIGREF47 shows the position distributions learned in the experimental environment. Fig. FIGREF47 (top) shows the word distributions of the names of places for each spatial concept, and Fig. FIGREF47 (bottom) shows the multinomial distributions of the indices of the position distributions. Consequently, the proposed method can learn the names of places corresponding to each place of the learning target. In the spatial concept of index INLINEFORM0 , the highest probability of words was “sofamae”, and the highest probability of the indices of the position distribution was INLINEFORM1 ; therefore, the name of a place “sofamae” was learned to correspond to the position distribution of INLINEFORM2 . In the spatial concept of index INLINEFORM3 , “kiqchi” and “daidokoro” were learned to correspond to the position distribution of INLINEFORM4 . Therefore, this result shows that multiple names can be learned for the same place. In the spatial concept of index INLINEFORM5 , “te” and “durunoatari” (one word in a normal situation) were learned to correspond to the position distributions of INLINEFORM6 and INLINEFORM7 . Therefore, this result shows that the same name can be learned for multiple places.
Phoneme recognition accuracy of uttered sentences
We compared the performance of the three types of word segmentation methods for all the considered uttered sentences. It was difficult to weigh the ambiguous syllable recognition and the unsupervised word segmentation separately. Therefore, this experiment treated each word delimiter as a single letter. We calculated the matching rate between the phoneme string of the recognition result of each uttered sentence and the correct phoneme string of the teaching data, which was suitably segmented into Japanese morphemes using MeCab, an off-the-shelf Japanese morphological analyzer that is widely used for natural language processing. The matching rate of the phoneme strings was calculated by using the phoneme accuracy rate (PAR) as follows: DISPLAYFORM0
The numerator of equation ( EQREF52 ) is calculated by using the Levenshtein distance between the correct phoneme string and the recognition phoneme string. INLINEFORM0 denotes the number of substitutions; INLINEFORM1 , the number of deletions; and INLINEFORM2 , the number of insertions. INLINEFORM3 represents the number of phonemes of the correct phoneme string.
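A minimal implementation of this PAR computation, assuming unit-cost edit operations and treating each phoneme as one symbol, could look as follows; the sample strings are hypothetical.

```python
def levenshtein(ref, hyp):
    """Minimum number of substitutions, deletions, and insertions (unit costs)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,
                                     dp[j - 1] + 1,
                                     prev + (r != h))  # the three edit operations
    return dp[-1]

def phoneme_accuracy_rate(ref_phonemes, hyp_phonemes):
    """PAR = (N - S - D - I) / N, with N the length of the correct phoneme string."""
    n = len(ref_phonemes)
    return (n - levenshtein(ref_phonemes, hyp_phonemes)) / n

print(phoneme_accuracy_rate(list("daidokoro"), list("daidokkoro")))
```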
Table TABREF54 shows the results of PAR. Table TABREF55 presents examples of the word segmentation results of the three considered methods. We found that the unsupervised morphological analyzer capable of using lattices improved the accuracy of phoneme recognition and word segmentation. Consequently, this result suggests that this word segmentation method considers the multiple hypotheses of speech recognition as a whole and reduces uncertainty, such as variability in recognition, by using the syllable recognition results in the lattice format.
Estimation accuracy of spatial concepts
We compared the matching rate with the estimation results of index INLINEFORM0 of the spatial concepts of each teaching utterance and the classification results of the correct answer given by humans. The evaluation of this experiment used the adjusted Rand index (ARI) BIBREF33 . ARI is a measure of the degree of similarity between two clustering results.
Further, we compared the proposed method with a method of word clustering without location information for the investigation of the effect of lexical acquisition using location information. In particular, a method of word clustering without location information used the Dirichlet process mixture (DPM) of the unigram model of an SBP representation. The parameters corresponding to those of the proposed method were the same as the parameters of the proposed method and were estimated using Gibbs sampling.
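The ARI itself can be computed with an off-the-shelf implementation; the index assignments below are hypothetical placeholders for the estimated concept indices and the human reference classes.

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical index assignments: estimated spatial-concept index per teaching
# utterance versus the reference classes given by human annotators.
estimated = [0, 0, 1, 1, 2, 2, 2, 3]
reference = [0, 0, 1, 1, 1, 2, 2, 3]
print(adjusted_rand_score(reference, estimated))
```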
Fig. FIGREF59 shows the results of the average of the ARI values of 10 trials of learning by Gibbs sampling. Here, we found that the proposed method showed the best score. These results and the results reported in Section SECREF49 suggest that learning by uttered sentences obtained by better phoneme recognition and better word segmentation produces a good result for the acquisition of spatial concepts. Furthermore, in a comparison of two clustering methods, we found that SpCoA was considerably better than DPM, a word clustering method without location information, irrespective of the word segmentation method used. The experimental results showed that it is possible to improve the estimation accuracy of spatial concepts and vocabulary by performing word clustering that considered location information.
Accuracy of acquired phoneme sequences representing the names of places
We evaluated whether the names of places were properly learned for the considered teaching places. This experiment assumes that the robot is asked for the best phoneme sequence INLINEFORM0 representing its self-position INLINEFORM1 . The robot moves close to each teaching place. The probability of a word INLINEFORM2 when the self-position INLINEFORM3 of the robot is given, INLINEFORM4 , can be obtained by using equation ( EQREF37 ). The word having the highest probability was selected. We computed the PAR between the correct phoneme sequence and the selected name of the place. Because “kiqchiN” and “daidokoro” were taught for the same place, the word with the higher PAR score was adopted.
Fig. FIGREF63 shows the results of PAR for the word considered to be the name of a place. SpCoA (latticelm), the proposed method using the results of unsupervised word segmentation on the basis of the speech recognition results in the lattice format, showed the best PAR score. In the 1-best and BoS methods, part of the syllable sequence of the name of a place was segmented more finely, as shown in Table TABREF55 . Therefore, the robot could not learn the name of the teaching place as a coherent phoneme sequence. In contrast, the robot could learn the names of teaching places more accurately by using the proposed method.
Self-localization that utilizes acquired spatial concepts
In this experiment, we validate that the robot can make efficient use of the acquired spatial concepts. We compare the estimation accuracy of localization for the proposed method (SpCoA MCL) and the conventional MCL. When a robot comes to the learning target, the utterer speaks out the sentence containing the name of the place once again for the robot. The moving trajectory of the robot and the uttered positions are the same in all the trials. In particular, the uttered sentence is “kokowa ** dayo”. When learning a task, this phrase is not used. The number of particles is INLINEFORM0 , and the initial particles are uniformly distributed in the considered environment. The robot performs a control operation for each time step.
The estimation error in the localization is evaluated as follows: While running localization, we record the estimation error (equation ( EQREF66 )) on the INLINEFORM0 plane of the floor for each time step. DISPLAYFORM0
where INLINEFORM0 denote the true position coordinates of the robot as obtained from the simulator, and INLINEFORM1 , INLINEFORM2 represent the weighted mean values of localization coordinates. The normalized weight INLINEFORM3 is obtained from the sensor model in MCL as a likelihood. In the utterance time, this likelihood is multiplied by the value calculated using equation ( EQREF37 ). INLINEFORM4 , INLINEFORM5 denote the INLINEFORM6 -coordinate and the INLINEFORM7 -coordinate of index INLINEFORM8 of each particle at time INLINEFORM9 . After running the localization, we calculated the average of INLINEFORM10 .
Further, we compared the estimation accuracy rate (EAR) of the global localization. In each trial, we calculated the proportion of time steps in which the estimation error was less than 50 cm.
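The two evaluation quantities can be computed as follows; the array shapes are my assumptions about how the logged trajectories, particles, and weights would be organized.

```python
import numpy as np

def localization_metrics(true_xy, particle_xy, particle_w, threshold=0.5):
    """Weighted-mean estimation error per time step and the EAR of one trial.

    Assumed shapes: true_xy (T, 2), particle_xy (T, N, 2), particle_w (T, N)
    with normalized weights; threshold is the 50 cm criterion in metres.
    """
    est_xy = np.einsum("tn,tnd->td", particle_w, particle_xy)   # weighted mean positions
    errors = np.linalg.norm(est_xy - true_xy, axis=1)
    return errors.mean(), float(np.mean(errors < threshold))
```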
Fig. FIGREF68 shows the results of the estimation error and the EAR for 10 trials of each method. All trials of SpCoA MCL (latticelm) and almost all trials of the method using 1-best NPYLM and BoS showed relatively small estimation errors. Results of the second trial of 1-best NPYLM and the fifth trial of BoS showed higher estimation errors. In these trials, many particles converged to other places instead of the place where the robot was, based on utterance information. Nevertheless, compared with those of the conventional MCL, the results obtained using spatial concepts showed an obvious improvement in the estimation accuracy. Consequently, spatial concepts acquired by using the proposed method proved to be very helpful in improving the localization accuracy.
Experiment II
In this experiment, the effectiveness of the proposed method was tested by using an autonomous mobile robot TurtleBot 2 in a real environment. Fig. FIGREF70 shows TurtleBot 2 used in the experiments.
Mapping and self-localization are performed by the robot operating system (ROS). The speech recognition system, the microphone, and the unsupervised morphological analyzer were the same as those described in Section SECREF4 .
Learning of spatial concepts in the real environment
We conducted an experiment of the spatial concept acquisition in a real environment of an entire floor of a building. In this experiment, self-localization was performed using a map generated by SLAM. The initial particles are defined by the true initial position of the robot. The generated map in the real environment and the names of teaching places are shown in Fig. FIGREF73 . The number of teaching places was 19, and the number of teaching names was 16. The teaching utterances were performed for a total of 100 times.
Fig. FIGREF75 shows the position distributions learned on the map. Table TABREF76 shows the five best elements of the multinomial distributions of the name of place INLINEFORM0 and the multinomial distributions of the indices of the position distribution INLINEFORM1 for each index of spatial concept INLINEFORM2 .
Thus, we found that the proposed method can learn the names of places corresponding to the considered teaching places in the real environment. For example, in the spatial concept of index INLINEFORM0 , “torire” was learned to correspond to a position distribution of INLINEFORM1 . Similarly, “kidanokeN” corresponded to INLINEFORM2 in INLINEFORM3 , and “kaigihitsu” corresponded to INLINEFORM4 in INLINEFORM5 . In the spatial concept of index INLINEFORM6 , a part of the syllable sequences was segmented too finely, as “sohatsuke”, “N”, and “tani”, “guchi”. In this case, the robot was taught two types of names. These words were learned to correspond to the same position distribution of INLINEFORM7 . In INLINEFORM8 , “gomibako” showed a high probability, and it corresponded to three position distributions of INLINEFORM9 . The position distribution of INLINEFORM10 had the fourth highest probability in the spatial concept INLINEFORM11 . Therefore, “raqkukeN,” which had the fifth highest probability in the spatial concept INLINEFORM12 (and was expected to relate to the spatial concept INLINEFORM13 ), can be estimated as a word drawn from spatial concept INLINEFORM14 . However, in practice, this situation did not cause any severe problems because the spatial concept of index INLINEFORM15 assigned higher probabilities to the word “raqkukeN” and the position distribution INLINEFORM16 than INLINEFORM17 did. In the probabilistic model, the relative probability and the integrative information are important. When the robot listened to an utterance related to “raqkukeN,” it could make use of the spatial concept of index INLINEFORM18 for self-localization with high probability and appropriately update its estimated self-location. We expected the spatial concept of index INLINEFORM19 to be learned as two separate spatial concepts. However, “watarirooka” and “kaidaNmae” were learned as the same spatial concept. Therefore, the multinomial distribution INLINEFORM20 showed a higher probability for the indices of the position distribution corresponding to the teaching places of both “watarirooka” and “kaidaNmae”.
The proposed method adopts a nonparametric Bayesian method in which it is possible to form spatial concepts that allow many-to-many correspondences between names and places. In contrast, this can create ambiguity that classifies originally different spatial concepts into one spatial concept as a side effect. There is a possibility that the ambiguity of concepts such as INLINEFORM0 will have a negative effect on self-localization, even though the self-localization performance was (overall) clearly increased by employing the proposed method. The solution of this problem will be considered in future work.
In terms of the PAR of uttered sentences, the evaluation value from the evaluation method used in Section SECREF49 is 0.83; this value is comparable to the result in Section SECREF49 . However, in terms of the PAR of the name of the place, the evaluation value from the evaluation method used in Section SECREF60 is 0.35, which is lower than that in Section SECREF60 . We consider that the increase in uncertainty in the real environment and the increase in the number of teaching words reduced the performance. We expect that this problem could be improved using further experience related to places, e.g., if the number of utterances per place is increased, and additional sensory information is provided.
Modification of localization by the acquired spatial concepts
In this experiment, we verified how the acquired spatial concepts modify self-localization in global self-localization. This experiment used the learning results of spatial concepts presented in Section SECREF71 . The experimental procedure was as follows. The initial particles were uniformly distributed over the entire floor. The robot began to move toward the target place from a short distance away. When the robot reached the target place, the utterer spoke a sentence containing the name of the place to the robot. Upon obtaining the speech information, the robot modified its self-localization on the basis of the acquired spatial concepts. The number of particles was the same as that mentioned in Section SECREF71 .
Fig. FIGREF80 shows the results of the self-localization before (the top part of the figure) and after (the bottom part of the figure) the utterance for three places. The particle states are denoted by red arrows. The moving trajectory of the robot is indicated by a green dotted arrow. Figs. FIGREF80 (a), (b), and (c) show the results for the names of places “toire”, “souhatsukeN”, and “gomibako”. Further, three spatial concepts, i.e., those at INLINEFORM0 , were learned as “gomibako”. In this experiment, the utterer spoke to the robot when the robot came close to the place of INLINEFORM1 . In all the examples shown in the top part of the figure, the particles were dispersed over several places. In contrast, in all the examples shown in the bottom part of the figure, the number of particles near the true position of the robot clearly increased. Thus, we can conclude that the proposed method can modify self-localization by using spatial concepts.
Conclusion and Future Work
In this paper, we discussed the spatial concept acquisition, lexical acquisition related to places, and self-localization using acquired spatial concepts. We proposed nonparametric Bayesian spatial concept acquisition method SpCoA that integrates latticelm BIBREF22 , a spatial clustering method, and MCL. We conducted experiments for evaluating the performance of SpCoA in a simulation and a real environment. SpCoA showed good results in all the experiments. In experiments of the learning of spatial concepts, the robot could form spatial concepts for the places of the learning targets from human continuous speech signals in both the room of the simulation environment and the entire floor of the real environment. Further, the unsupervised word segmentation method latticelm could reduce the variability and errors in the recognition of phonemes in all the utterances. SpCoA achieved more accurate lexical acquisition by performing word segmentation using the lattices of the speech recognition results. In the self-localization experiments, the robot could effectively utilize the acquired spatial concepts for recognizing self-position and reducing the estimation errors in self-localization. As a method that further improves the performance of the lexical acquisition, a mutual learning method was proposed by Nakamura et al. on the basis of the integration of the learning of object concepts with a language model BIBREF34 , BIBREF35 . Following a similar approach, Heymann et al. proposed a method that alternately and repeatedly updates phoneme recognition results and the language model by using unsupervised word segmentation BIBREF36 . As a result, they achieved robust lexical acquisition. In our study, we can expect to improve the accuracy of lexical acquisition for spatial concepts by estimating both the spatial concepts and the language model.
Furthermore, as a future work, we consider it necessary for robots to learn spatial concepts online and to recognize whether the uttered word indicates the current place or destination. Furthermore, developing a method that simultaneously acquires spatial concepts and builds a map is one of our future objectives. We believe that the spatial concepts will have a positive effect on the mapping. We also intend to examine a method that associates the image and the landscape with spatial concepts and a method that estimates both spatial concepts and object concepts.
[] Akira Taniguchi received his BE degree from Ritsumeikan University in 2013 and his ME degree from the Graduate School of Information Science and Engineering, Ritsumeikan University, in 2015. He is currently working toward his PhD degree at the Emergent System Lab, Ritsumeikan University, Japan. His research interests include language acquisition, concept acquisition, and symbol emergence in robotics.
[] Tadahiro Taniguchi received the ME and PhD degrees from Kyoto University in 2003 and 2006, respectively. From April 2005 to March 2006, he was a Japan Society for the Promotion of Science (JSPS) research fellow (DC2) in the Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University. From April 2006 to March 2007, he was a JSPS research fellow (PD) in the same department. From April 2007 to March 2008, he was a JSPS research fellow in the Department of Systems Science, Graduate School of Informatics, Kyoto University. From April 2008 to March 2010, he was an assistant professor at the Department of Human and Computer Intelligence, Ritsumeikan University. Since April 2010, he has been an associate professor in the same department. He is currently engaged in research on machine learning, emergent systems, and semiotics.
[] Tetsunari Inamura received the BE, MS and PhD degrees from the University of Tokyo, in 1995, 1997 and 2000, respectively. He was a Researcher of the CREST program, Japanese Science and Technology Cooperation, from 2000 to 2003, and then joined the Department of Mechano-Informatics, School of Information Science and Technology, University of Tokyo as a Lecturer, from 2003 to 2006. He is now an Associate Professor in the Principles of Informatics Research Division, National Institute of Informatics, and an Associate Professor in the Department of Informatics, School of Multidisciplinary Sciences, Graduate University for Advanced Studies (SOKENDAI). His research interests include imitation learning and symbol emergence on humanoid robots, development of interactive robots through virtual reality and so on. | No |
a74886d789a5d7ebcf7f151bdfb862c79b6b8a12 | a74886d789a5d7ebcf7f151bdfb862c79b6b8a12_0 | Q: What were the baselines?
Text: Introduction
Misinformation and disinformation are two of the most pertinent and difficult challenges of the information age, exacerbated by the popularity of social media. In an effort to counter this, a significant amount of manual labour has been invested in fact checking claims, often collecting the results of these manual checks on fact checking portals or websites such as politifact.com or snopes.com. In a parallel development, researchers have recently started to view fact checking as a task that can be partially automated, using machine learning and NLP to automatically predict the veracity of claims. However, existing efforts either use small datasets consisting of naturally occurring claims (e.g. BIBREF0 , BIBREF1 ), or datasets consisting of artificially constructed claims such as FEVER BIBREF2 . While the latter offer valuable contributions to further automatic claim verification work, they cannot replace real-world datasets.
Datasets
Over the past few years, a variety of mostly small datasets related to fact checking have been released. An overview of core datasets is given in Table TABREF4 , and a version of this table extended with the number of documents, the source of annotations and state-of-the-art (SoA) performances can be found in the appendix (Table TABREF1 ). The datasets can be grouped into four categories (I–IV). Category I contains datasets aimed at testing how well the veracity of a claim can be predicted using the claim alone, without context or evidence documents. Category II contains datasets bundled with documents related to each claim – either topically related to provide context, or serving as evidence. Those documents are, however, not annotated. Category III datasets are aimed at predicting veracity; they encourage retrieving evidence documents as part of their task description, but do not distribute them. Finally, category IV comprises datasets annotated for both veracity and stance. Thus, every document is annotated with a label indicating whether the document supports or denies the claim, or is unrelated to it. Additional labels can then be added to the datasets to better predict veracity, for instance by jointly training stance and veracity prediction models.
Methods not shown in the table, but related to fact checking, are stance detection for claims BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , satire detection BIBREF21 , clickbait detection BIBREF22 , conspiracy news detection BIBREF23 , rumour cascade detection BIBREF24 and claim perspectives detection BIBREF25 .
Claims are obtained from a variety of sources, including Wikipedia, Twitter, criminal reports and fact checking websites such as politifact.com and snopes.com. The same goes for documents – these are often websites obtained through Web search queries, or Wikipedia documents, tweets or Facebook posts. Most datasets contain a fairly small number of claims, and those that do not, often lack evidence documents. An exception is BIBREF2 , who create a Wikipedia-based fact checking dataset. While a good testbed for developing deep neural architectures, their dataset is artificially constructed and can thus not take metadata about claims into account.
Contributions: We provide a dataset that, uniquely among extant datasets, contains a large number of naturally occurring claims and rich additional meta-information.
Methods
Fact checking methods partly depend on the type of dataset used. Methods only taking into account claims typically encode those with CNNs or RNNs BIBREF3 , BIBREF4 , and potentially encode metadata BIBREF3 in a similar way. Methods for small datasets often use hand-crafted features that are a mix of bag-of-words and other lexical features, e.g. LIWC, and then use those as input to an SVM or MLP BIBREF0 , BIBREF4 , BIBREF13 . Some use additional Twitter-specific features BIBREF26 . More involved methods taking into account evidence documents, often trained on larger datasets, consist of evidence identification and ranking following a neural model that measures the compatibility between claim and evidence BIBREF2 , BIBREF27 , BIBREF28 .
Contributions: The latter category above is the most related to our paper as we consider evidence documents. However, existing models are not trained jointly for evidence identification, or for stance and veracity prediction, but rather employ a pipeline approach. Here, we show that a joint approach that learns to weigh evidence pages by their importance for veracity prediction can improve downstream veracity prediction performance.
Dataset Construction
We crawled a total of 43,837 claims with their metadata (see details in Table TABREF1 ). We present the data collection in terms of selecting sources, crawling claims and associated metadata (Section SECREF9 ); retrieving evidence pages; and linking entities in the crawled claims (Section SECREF13 ).
Selection of sources
We crawled all active fact checking websites in English listed by Duke Reporters' Lab and on the Fact Checking Wikipedia page. This resulted in 38 websites in total (shown in Table TABREF1 ). Ten websites could not be crawled, as further detailed in Table TABREF40 . In the later experimental descriptions, we refer to the part of the dataset crawled from a specific fact checking website as a domain, and we refer to each website as a source.
From each source, we crawled the ID, claim, label, URL, reason for label, categories, person making the claim (speaker), person fact checking the claim (checker), tags, article title, publication date, claim date, as well as the full text that appears when the claim is clicked. Lastly, the above full text contains hyperlinks, so we further crawled the full text that appears when each of those hyperlinks is clicked (outlinks).
There were a number of crawling issues, e.g. security protection of websites with SSL/TLS protocols, timeouts, URLs that pointed to PDF files instead of HTML content, or unresolvable encoding. In all of these cases, the content could not be retrieved. For some websites, no veracity labels were available, in which case they were not selected as domains for training a veracity prediction model. Moreover, not all types of metadata (category, speaker, checker, tags, claim date, publish date) were available for all websites; and availability of articles and full texts differs as well.
We performed semi-automatic cleansing of the dataset as follows. First, we double-checked that the veracity labels would not appear in claims. For some domains, the first or last sentence of the claim would sometimes contain the veracity label, in which case we would discard either the full sentence or part of the sentence. Next, we checked the dataset for duplicate claims. We found 202 such instances, 69 of them with different labels. Upon manual inspection, this was mainly due to them appearing on different websites, with labels not differing much in practice (e.g. `Not true', vs. `Mostly False'). We made sure that all such duplicate claims would be in the training split of the dataset, so that the models would not have an unfair advantage. Finally, we performed some minor manual merging of label types for the same domain where it was clear that they were supposed to denote the same level of veracity (e.g. `distorts', `distorts the facts').
This resulted in a total of 36,534 claims with their metadata. For the purposes of fact verification, we discarded instances with labels that occur fewer than 5 times, resulting in 34,918 claims. The number of instances, as well as labels per domain, are shown in Table TABREF34 and label names in Table TABREF43 in the appendix. The dataset is split into a training part (80%) and a development and testing part (10% each) in a label-stratified manner. Note that the domains vary in the number of labels, ranging from 2 to 27. Labels include both straight-forward ratings of veracity (`correct', `incorrect'), but also labels that would be more difficult to map onto a veracity scale (e.g. `grass roots movement!', `misattributed', `not the whole story'). We therefore do not postprocess label types across domains to map them onto the same scale, and rather treat them as is. In the methodology section (Section SECREF4 ), we show how a model can be trained on this dataset regardless by framing this multi-domain veracity prediction task as a multi-task learning (MTL) one.
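For a given domain, the label-stratified 80/10/10 split described here can be reproduced with scikit-learn. The sketch below is illustrative only: the DataFrame column name and the random seed are assumptions, not details taken from the paper.

```python
from sklearn.model_selection import train_test_split

def stratified_split(df, label_col="label", seed=42):
    """Split one domain's claims 80/10/10 while preserving label proportions."""
    train, rest = train_test_split(
        df, test_size=0.2, stratify=df[label_col], random_state=seed)
    # Labels that are very rare after the first split may need to be pooled
    # or split without stratification.
    dev, test = train_test_split(
        rest, test_size=0.5, stratify=rest[label_col], random_state=seed)
    return train, dev, test
```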
Retrieving Evidence Pages
The text of each claim is submitted verbatim as a query to the Google Search API (without quotes). The 10 most highly ranked search results are retrieved, for each of which we save the title; Google search rank; URL; time stamp of last update; search snippet; as well as the full Web page. We acknowledge that search results change over time, which might have an effect on veracity prediction. However, studying such temporal effects is outside the scope of this paper. Similar to Web crawling claims, as described in Section SECREF9 , the corresponding Web pages can in some cases not be retrieved, in which case fewer than 10 evidence pages are available. The resulting evidence pages are from a wide variety of URL domains, though with a predictable skew towards popular websites, such as Wikipedia or The Guardian (see Table TABREF42 in the appendix for detailed statistics).
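The paper does not state which Google Search endpoint was used; the sketch below assumes the Custom Search JSON API, with placeholder credentials, and keeps the result fields that this API exposes directly (rank, title, URL and snippet).

```python
import requests

API_KEY = "YOUR_API_KEY"      # placeholder credential
CX = "YOUR_SEARCH_ENGINE_ID"  # placeholder custom search engine ID

def retrieve_evidence(claim, k=10):
    """Submit the claim verbatim (without quotes) and keep the top-k hits."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": claim, "num": k},
    )
    resp.raise_for_status()
    results = []
    for rank, item in enumerate(resp.json().get("items", []), start=1):
        results.append({
            "rank": rank,
            "title": item.get("title"),
            "url": item.get("link"),
            "snippet": item.get("snippet"),
        })
    return results
```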
Entity Detection and Linking
To better understand what claims are about, we conduct entity linking for all claims. Specifically, mentions of people, places, organisations, and other named entities within a claim are recognised and linked to their respective Wikipedia pages, if available. Where there are different entities with the same name, they are disambiguated. For this, we apply the state-of-the-art neural entity linking model by BIBREF29 . This results in a total of 25,763 entities detected and linked to Wikipedia, with a total of 15,351 claims involved, meaning that 42% of all claims contain entities that can be linked to Wikipedia. Later on, we use entities as additional metadata (see Section SECREF31 ). The distribution of claim numbers according to the number of entities they contain is shown in Figure FIGREF15 . We observe that the majority of claims have one to four entities, and the maximum number of 35 entities occurs in one claim only. Out of the 25,763 entities, 2,767 are unique entities. The top 30 most frequent entities are listed in Table TABREF14 . This clearly shows that most of the claims involve entities related to the United States, which is to be expected, as most of the fact checking websites are US-based.
Claim Veracity Prediction
We train several models to predict the veracity of claims. Those fall into two categories: those that only consider the claims themselves, and those that encode evidence pages as well. In addition, claim metadata (speaker, checker, linked entities) is optionally encoded for both categories of models, and ablation studies with and without that metadata are shown. We first describe the base model used in Section SECREF16 , followed by introducing our novel evidence ranking and veracity prediction model in Section SECREF22 , and lastly the metadata encoding model in Section SECREF31 .
Multi-Domain Claim Veracity Prediction with Disparate Label Spaces
Since not all fact checking websites use the same claim labels (see Table TABREF34 , and Table TABREF43 in the appendix), training a claim veracity prediction model is not entirely straight-forward. One option would be to manually map those labels onto one another. However, since the sheer number of labels is rather large (165), and it is not always clear from the guidelines on fact checking websites how they can be mapped onto one another, we opt to learn how these labels relate to one another as part of the veracity prediction model. To do so, we employ the multi-task learning (MTL) approach inspired by collaborative filtering presented in BIBREF30 (MTL with LEL: multi-task learning with a label embedding layer), which excels on pairwise sequence classification tasks with disparate label spaces. More concretely, each domain is modelled as its own task in an MTL architecture, and labels are projected into a fixed-length label embedding space. Predictions are then made by taking the dot product between the claim-evidence embeddings and the label embeddings. By doing so, the model implicitly learns how semantically close the labels are to one another, and can benefit from this knowledge when making predictions for individual tasks, which on their own might only have a small number of instances. When making predictions for individual domains/tasks, both at training and at test time, as well as when calculating the loss, a mask is applied so that predictions are restricted to the set of labels known for that task.
Note that the setting here slightly differs from BIBREF30 . There, tasks are less strongly related to one another; for example, they consider stance detection, aspect-based sentiment analysis and natural language inference. Here, we have different domains, as opposed to conceptually different tasks, but use their framework, as we have the same underlying problem of disparate label spaces. A more formal problem definition follows next, as our evidence ranking and veracity prediction model in Section SECREF22 then builds on it.
We frame our problem as a multi-task learning one, where access to labelled datasets for INLINEFORM0 tasks INLINEFORM1 is given at training time with a target task INLINEFORM2 that is of particular interest. The training dataset for task INLINEFORM3 consists of INLINEFORM4 examples INLINEFORM5 and their labels INLINEFORM6 . The base model is a classic deep neural network MTL model BIBREF31 that shares its parameters across tasks and has task-specific softmax output layers that output a probability distribution INLINEFORM7 for task INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the weight matrix and bias term of the output layer of task INLINEFORM3 respectively, INLINEFORM4 is the jointly learned hidden representation, INLINEFORM5 is the number of labels for task INLINEFORM6 , and INLINEFORM7 is the dimensionality of INLINEFORM8 . The MTL model is trained to minimise the sum of individual task losses INLINEFORM9 using a negative log-likelihood objective.
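As a concrete illustration, a hard-parameter-sharing version of this base model can be sketched in PyTorch as follows; the shared encoder is left abstract and all dimensions are placeholders rather than the settings used in the paper.

```python
import torch
import torch.nn as nn

class MTLBaseModel(nn.Module):
    """Shared encoder with one task-specific softmax output layer per domain."""
    def __init__(self, encoder, hidden_dim, labels_per_task):
        super().__init__()
        self.encoder = encoder  # parameters shared across all tasks
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n) for n in labels_per_task])

    def forward(self, inputs, task_id):
        h = self.encoder(inputs)  # jointly learned hidden representation
        return torch.log_softmax(self.heads[task_id](h), dim=-1)
```

Training would then sum the per-task negative log-likelihood losses, e.g. with nn.NLLLoss.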
To learn the relationships between labels, a Label Embedding Layer (LEL) embeds labels of all tasks in a joint Euclidean space. Instead of training separate softmax output layers as above, a label compatibility function INLINEFORM0 measures how similar a label with embedding INLINEFORM1 is to the hidden representation INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is the dot product. Padding is applied such that INLINEFORM1 and INLINEFORM2 have the same dimensionality. Matrix multiplication and softmax are used for making predictions: DISPLAYFORM0
where INLINEFORM0 is the label embedding matrix for all tasks and INLINEFORM1 is the dimensionality of the label embeddings. We apply a task-specific mask to INLINEFORM2 in order to obtain a task-specific probability distribution INLINEFORM3 . The LEL is shared across all tasks, which allows the model to learn the relationships between labels in the joint embedding space.
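The label embedding variant can be sketched as below: a single embedding matrix holds the labels of all domains, scores are dot products between the (projected) instance representation and each label embedding, and a boolean mask hides labels that do not belong to the current domain. This is a schematic re-implementation, not the authors' code; a linear projection is used in place of padding as a simplification.

```python
import torch
import torch.nn as nn

class LabelEmbeddingHead(nn.Module):
    """Label compatibility as a dot product in a joint label embedding space."""
    def __init__(self, hidden_dim, n_total_labels, label_dim=32):
        super().__init__()
        self.project = nn.Linear(hidden_dim, label_dim)
        self.label_emb = nn.Embedding(n_total_labels, label_dim)  # shared LEL

    def forward(self, h, task_label_mask):
        # h: [batch, hidden_dim]; task_label_mask: bool tensor [n_total_labels]
        scores = self.project(h) @ self.label_emb.weight.t()  # [batch, n_labels]
        scores = scores.masked_fill(~task_label_mask, float("-inf"))
        return torch.softmax(scores, dim=-1)
```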
Joint Evidence Ranking and Claim Veracity Prediction
So far, we have ignored the issue of how to obtain claim representation, as the base model described in the previous section is agnostic to how instances are encoded. A very simple approach, which we report as a baseline, is to encode claim texts only. Such a model ignores evidence for and against a claim, and ends up guessing the veracity based on surface patterns observed in the claim texts.
We next introduce two variants of evidence-based veracity prediction models that encode 10 pieces of evidence in addition to the claim. Here, we opt to encode search snippets as opposed to whole retrieved pages. While the latter would also be possible, it comes with a number of additional challenges, such as encoding large documents, parsing tables or PDF files, and encoding images or videos on these pages, which we leave to future work. Search snippets also have the benefit that they already contain summaries of the part of the page content that is most related to the claim.
Our problem is to obtain encodings for INLINEFORM0 examples INLINEFORM1 . For simplicity, we will henceforth drop the task superscript and refer to instances as INLINEFORM2 , as instance encodings are learned in a task-agnostic fashion. Each example further consists of a claim INLINEFORM3 and INLINEFORM4 evidence pages INLINEFORM5 .
Each claim and evidence page is encoded with a BiLSTM to obtain a sentence embedding, which is the concatenation of the last state of the forward and backward reading of the sentence, i.e. INLINEFORM0 , where INLINEFORM1 is the sentence embedding.
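A minimal PyTorch sketch of such a sentence encoder, operating on already-embedded word sequences, could look like this:

```python
import torch
import torch.nn as nn

class BiLSTMSentenceEncoder(nn.Module):
    """Sentence embedding = [last forward hidden state; last backward hidden state]."""
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, word_embs):                   # [batch, seq_len, emb_dim]
        _, (h_n, _) = self.lstm(word_embs)          # h_n: [2, batch, hidden_dim]
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # [batch, 2 * hidden_dim]
```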
Next, we want to combine claims and evidence sentence embeddings into joint instance representations. In the simplest case, referred to as model variant crawled_avg, we mean average the BiLSTM sentence embeddings of all evidence pages (signified by the overline) and concatenate those with the claim embeddings, i.e. DISPLAYFORM0
where INLINEFORM0 is the resulting encoding for training example INLINEFORM1 and INLINEFORM2 denotes vector concatenation. However, this has the disadvantage that all evidence pages are considered equal.
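Given such sentence embeddings, the crawled_avg combination reduces to averaging and concatenation; a minimal sketch:

```python
import torch

def crawled_avg(claim_emb, evidence_embs):
    """claim_emb: [batch, d]; evidence_embs: [batch, n_pages, d]."""
    mean_evidence = evidence_embs.mean(dim=1)             # treats all pages equally
    return torch.cat([claim_emb, mean_evidence], dim=-1)  # joint instance encoding
```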
The here proposed alternative instance encoding model, crawled_ranked, which achieves the highest overall performance as discussed in Section SECREF5 , learns the compatibility between an instance's claim and each evidence page. It ranks evidence pages by their utility for the veracity prediction task, and then uses the resulting ranking to obtain a weighted combination of all claim-evidence pairs. No direct labels are available to learn the ranking of individual documents, only for the veracity of the associated claim, so the model has to learn evidence ranks implicitly.
To combine claim and evidence representations, we use the matching model proposed for the task of natural language inference by BIBREF32 and adapt it to combine an instance's claim representation with each evidence representation, i.e. DISPLAYFORM0
where INLINEFORM0 is the resulting encoding for training example INLINEFORM1 and evidence page INLINEFORM2 , INLINEFORM3 denotes vector concatenation, and INLINEFORM4 denotes the dot product.
All joint claim-evidence representations INLINEFORM0 are then projected into the binary space via a fully connected layer INLINEFORM1 , followed by a non-linear activation function INLINEFORM2 , to obtain a soft ranking of claim-evidence pairs, in practice a 10-dimensional vector, DISPLAYFORM0
where INLINEFORM0 denotes concatenation.
Scores for all labels are obtained as per ( EQREF28 ) above, with the same input instance embeddings as for the evidence ranker, i.e. INLINEFORM0 . Final predictions for all claim-evidence pairs are then obtained by taking the dot product between the label scores and binary evidence ranking scores, i.e. DISPLAYFORM0
Note that the novelty here is that, unlike for the model described in BIBREF32 , we have no direct labels for learning weights for this matching model. Rather, our model has to implicitly learn these weights for each claim-evidence pair in an end-to-end fashion given the veracity labels.
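Putting the pieces together, crawled_ranked can be sketched as below. Since the exact matching features are hidden behind the placeholders above, the sketch uses a common heuristic combination of concatenation, elementwise product and difference; the soft ranking layer and the final weighting of per-pair label scores follow the description in the text, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class CrawledRanked(nn.Module):
    """Implicitly rank evidence snippets and weight per-pair label scores by rank."""
    def __init__(self, sent_dim, n_pages=10, label_dim=32, n_total_labels=165):
        super().__init__()
        match_dim = 4 * sent_dim  # [c; e; c*e; c-e], an illustrative choice
        self.rank_layer = nn.Linear(n_pages * match_dim, n_pages)
        self.project = nn.Linear(match_dim, label_dim)
        self.label_emb = nn.Embedding(n_total_labels, label_dim)

    def forward(self, claim_emb, evidence_embs, task_label_mask):
        # claim_emb: [batch, d]; evidence_embs: [batch, n_pages, d]
        c = claim_emb.unsqueeze(1).expand_as(evidence_embs)
        pairs = torch.cat([c, evidence_embs, c * evidence_embs,
                           c - evidence_embs], dim=-1)    # [batch, n_pages, 4d]
        ranks = torch.sigmoid(
            self.rank_layer(pairs.flatten(start_dim=1)))  # soft ranking [batch, n_pages]
        scores = self.project(pairs) @ self.label_emb.weight.t()
        scores = scores.masked_fill(~task_label_mask, float("-inf"))
        scores = torch.softmax(scores, dim=-1)            # per-pair label distributions
        return torch.einsum("bp,bpl->bl", ranks, scores)  # rank-weighted combination
```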
Metadata
We experiment with how useful claim metadata is, and encode the following as one-hot vectors: speaker, category, tags and linked entities. We do not encode `Reason' as it gives away the label, and do not include `Checker' as there are too many unique checkers for this information to be relevant. The claim publication date is potentially relevant, but it does not make sense to merely model this as a one-hot feature, so we leave incorporating temporal information to future work.
Since all metadata consists of individual words and phrases, a sequence encoder is not necessary, and we opt for a CNN followed by a max pooling operation as used in BIBREF3 to encode metadata for fact checking. The max-pooled metadata representations, denoted INLINEFORM0 , are then concatenated with the instance representations, e.g. for the most elaborate model, crawled_ranked, these would be concatenated with INLINEFORM1 .
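A sketch of such a metadata encoder follows; the embedding size, filter count and kernel size here are illustrative defaults rather than the settings reported in the experimental setup.

```python
import torch
import torch.nn as nn

class MetadataCNN(nn.Module):
    """Encode metadata tokens with a 1-D CNN followed by max pooling."""
    def __init__(self, vocab_size, emb_dim=64, n_filters=32, kernel_size=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, meta_ids):                # [batch, n_meta_tokens]
        x = self.emb(meta_ids).transpose(1, 2)  # [batch, emb_dim, n_tokens]
        x = torch.relu(self.conv(x))            # [batch, n_filters, n_tokens]
        return x.max(dim=-1).values             # max-pooled metadata representation
```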
Experimental Setup
The base sentence embedding model is a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 . We opt for this strong baseline sentence encoding model, as opposed to engineering sentence embeddings that work particularly well for this dataset, to showcase the dataset. We would expect pre-trained contextual encoding models, e.g. ELMO BIBREF33 , ULMFit BIBREF34 , BERT BIBREF35 , to offer complementary performance gains, as has been shown for a few recent papers BIBREF36 , BIBREF37 .
For claim veracity prediction without evidence documents with the MTL with LEL model, we use the following sentence encoding variants: claim-only, which uses a BiLSTM-based sentence embedding as input, and claim-only_embavg, which uses a sentence embedding based on mean averaged word embeddings as input.
We train one multi-task model per task (i.e., one model per domain). We perform a grid search over the following hyperparameters, tuned on the respective dev set, and evaluate on the corresponding test set (final settings are underlined): word embedding size [64, 128, 256], BiLSTM hidden layer size [64, 128, 256], number of BiLSTM hidden layers [1, 2, 3], BiLSTM dropout on input and output layers [0.0, 0.1, 0.2, 0.5], word-by-word-attention for BiLSTM with window size 10 BIBREF38 [True, False], skip-connections for the BiLSTM [True, False], batch size [32, 64, 128], label embedding size [16, 32, 64]. We use ReLU as an activation function for both the BiLSTM and the CNN. For the CNN, the following hyperparameters are used: number of filters [32], kernel size [32]. We train using cross-entropy loss and the RMSProp optimiser with initial learning rate of INLINEFORM0 and perform early stopping on the dev set with a patience of 3.
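The search space can be written out explicitly, for example as below; train_and_evaluate is a stand-in for the actual training routine and is not a function from the paper.

```python
from itertools import product

GRID = {
    "word_emb_size": [64, 128, 256],
    "lstm_hidden_size": [64, 128, 256],
    "lstm_layers": [1, 2, 3],
    "dropout": [0.0, 0.1, 0.2, 0.5],
    "word_by_word_attention": [True, False],
    "skip_connections": [True, False],
    "batch_size": [32, 64, 128],
    "label_emb_size": [16, 32, 64],
}

def train_and_evaluate(config):
    """Placeholder: train with early stopping (patience 3) and return the dev score."""
    raise NotImplementedError

best_score, best_config = float("-inf"), None
for values in product(*GRID.values()):
    config = dict(zip(GRID.keys(), values))
    score = train_and_evaluate(config)
    if score > best_score:
        best_score, best_config = score, config
```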
Results
For each domain, we compute the Micro as well as Macro F1, then mean average results over all domains. Core results with all vs. no metadata are shown in Table TABREF30 . We first experiment with different base model variants and find that label embeddings improve results, and that the best proposed models utilising multiple domains outperform single-task models (see Table TABREF36 ). This corroborates the findings of BIBREF30 . Per-domain results with the best model are shown in Table TABREF34 . Domain names are hereafter abbreviated for brevity; see Table TABREF1 in the appendix for correspondences to full website names. Unsurprisingly, it is hard to achieve a high Macro F1 for domains with many labels, e.g. tron and snes. Further, some domains, surprisingly mostly with small numbers of instances, seem to be very easy – a perfect Micro and Macro F1 score of 1.0 is achieved on ranz, bove, buca, fani and thal. We find that for those domains, the verdict is often already revealed as part of the claim using explicit wording.
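The reported numbers correspond to the following computation with scikit-learn; the dictionary layout is an assumption made for the sketch.

```python
import numpy as np
from sklearn.metrics import f1_score

def mean_domain_f1(per_domain):
    """per_domain maps each domain to a (gold_labels, predicted_labels) pair."""
    micro = [f1_score(g, p, average="micro") for g, p in per_domain.values()]
    macro = [f1_score(g, p, average="macro") for g, p in per_domain.values()]
    return np.mean(micro), np.mean(macro)
```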
Our evidence-based claim veracity prediction models outperform claim-only veracity prediction models by a large margin. Unsurprisingly, claim-only_embavg is outperformed by claim-only. Further, crawled_ranked is our best-performing model in terms of Micro F1 and Macro F1, meaning that our model captures that not every piece of evidence is equally important, and can utilise this for veracity prediction.
We perform an ablation analysis of how metadata impacts results, shown in Table TABREF35 . Out of the different types of metadata, topic tags on their own contribute the most. This is likely because they offer highly complementary information to the claim text and evidence pages. Only when all metadata is used together do we achieve a higher Macro F1, at a similar Micro F1, than when using no metadata at all. To further investigate this, we split the test set into those instances for which no metadata is available vs. those for which metadata is available. We find that encoding metadata within the model hurts performance for domains where no metadata is available, but improves performance where it is. In practice, an ensemble of both types of models would be sensible, as well as exploring more involved methods of encoding metadata.
Analysis and Discussion
An analysis of labels frequently confused with one another, for the largest domain `pomt' and the best-performing model crawled_ranked + meta, is shown in Figure FIGREF39 . The diagonal represents when gold and predicted labels match, and the numbers signify the number of test instances. One can observe that the model struggles more to detect claims with the label `true' than those with the label `false'. Generally, many confusions occur over close labels, e.g. `half-true' vs. `mostly true'. We further analyse the properties of instances that are predicted correctly vs. incorrectly, using the crawled_ranked + meta model. We find that, unsurprisingly, longer claims are harder to classify correctly, and that claims with a high direct token overlap with evidence pages lead to a high evidence ranking. When it comes to frequently occurring tags and entities, very general tags that do not give away much, such as `government-and-politics' or `tax', frequently co-occur with incorrect predictions, whereas more specific tags such as `brisbane-4000' or `hong-kong' tend to co-occur with correct predictions. Similar trends are observed for bigrams. This means that the model has an easy time succeeding for instances where the claims are short, where specific topics tend to co-occur with certain veracities, and where evidence documents are highly informative. Instances with longer, more complex claims where evidence is ambiguous remain challenging.
Conclusions
We present a new, real-world fact checking dataset, currently the largest of its kind. It consists of 34,918 claims collected from 26 fact checking websites, rich metadata and 10 retrieved evidence pages per claim. We find that encoding the metadata as well as the evidence pages helps, and we introduce a new joint model for ranking evidence pages and predicting veracity.
Acknowledgments
This research is partially supported by QUARTZ (721321, EU H2020 MSCA-ITN) and DABAI (5153-00004A, Innovation Fund Denmark).
Appendix
Summary statistics for claim collection. “Domain” indicates the domain name used for the veracity prediction experiments, “–” indicates that the website was not used due to missing or insufficient claim labels, see Section SECREF12 .
Comparison of fact checking datasets. Doc = all doc types (including tweets, replies, etc.). SoA perform indicates state-of-the-art performance. INLINEFORM0 indicates that claims are not naturally occurring: BIBREF6 use events as claims; BIBREF7 use DBpedia triples as claims; BIBREF9 use tweets as claims; and BIBREF2 rewrite sentences in Wikipedia as claims. INLINEFORM1 denotes that the SoA performance is from other papers. Best performance for BIBREF3 is from BIBREF40 ; BIBREF2 from BIBREF28 ; BIBREF10 from BIBREF42 in English, BIBREF12 from BIBREF26 ; and BIBREF13 from BIBREF39 . | a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 |
e9ccc74b1f1b172224cf9f01e66b1fa9e34d2593 | e9ccc74b1f1b172224cf9f01e66b1fa9e34d2593_0 | Q: What metadata is included?
| besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish date and claim date |
2948015c2a5cd6a7f2ad99b4622f7e4278ceb0d4 | 2948015c2a5cd6a7f2ad99b4622f7e4278ceb0d4_0 | Q: How many expert journalists were there?
Retrieving Evidence Pages
The text of each claim is submitted verbatim as a query to the Google Search API (without quotes). The 10 most highly ranked search results are retrieved, for each of which we save the title; Google search rank; URL; time stamp of last update; search snippet; as well as the full Web page. We acknowledge that search results change over time, which might have an effect on veracity prediction. However, studying such temporal effects is outside the scope of this paper. Similar to Web crawling claims, as described in Section SECREF9 , the corresponding Web pages can in some cases not be retrieved, in which case fewer than 10 evidence pages are available. The resulting evidence pages are from a wide variety of URL domains, though with a predictable skew towards popular websites, such as Wikipedia or The Guardian (see Table TABREF42 in the appendix for detailed statistics).
Entity Detection and Linking
To better understand what claims are about, we conduct entity linking for all claims. Specifically, mentions of people, places, organisations, and other named entities within a claim are recognised and linked to their respective Wikipedia pages, if available. Where there are different entities with the same name, they are disambiguated. For this, we apply the state-of-the-art neural entity linking model by BIBREF29 . This results in a total of 25,763 entities detected and linked to Wikipedia, with a total of 15,351 claims involved, meaning that 42% of all claims contain entities that can be linked to Wikipedia. Later on, we use entities as additional metadata (see Section SECREF31 ). The distribution of claim numbers according to the number of entities they contain is shown in Figure FIGREF15 . We observe that the majority of claims have one to four entities, and the maximum number of 35 entities occurs in one claim only. Out of the 25,763 entities, 2,767 are unique entities. The top 30 most frequent entities are listed in Table TABREF14 . This clearly shows that most of the claims involve entities related to the United States, which is to be expected, as most of the fact checking websites are US-based.
Claim Veracity Prediction
We train several models to predict the veracity of claims. Those fall into two categories: those that only consider the claims themselves, and those that encode evidence pages as well. In addition, claim metadata (speaker, checker, linked entities) is optionally encoded for both categories of models, and ablation studies with and without that metadata are shown. We first describe the base model used in Section SECREF16 , followed by introducing our novel evidence ranking and veracity prediction model in Section SECREF22 , and lastly the metadata encoding model in Section SECREF31 .
Multi-Domain Claim Veracity Prediction with Disparate Label Spaces
Since not all fact checking websites use the same claim labels (see Table TABREF34 , and Table TABREF43 in the appendix), training a claim veracity prediction model is not entirely straight-forward. One option would be to manually map those labels onto one another. However, since the sheer number of labels is rather large (165), and it is not always clear from the guidelines on fact checking websites how they can be mapped onto one another, we opt to learn how these labels relate to one another as part of the veracity prediction model. To do so, we employ the multi-task learning (MTL) approach inspired by collaborative filtering presented in BIBREF30 (MTL with LEL–multitask learning with label embedding layer) that excels on pairwise sequence classification tasks with disparate label spaces. More concretely, each domain is modelled as its own task in a MTL architecture, and labels are projected into a fixed-length label embedding space. Predictions are then made by taking the dot product between the claim-evidence embeddings and the label embeddings. By doing so, the model implicitly learns how semantically close the labels are to one another, and can benefit from this knowledge when making predictions for individual tasks, which on their own might only have a small number of instances. When making predictions for individual domains/tasks, both at training and at test time, as well as when calculating the loss, a mask is applied such that the valid and invalid labels for that task are restricted to the set of known task labels.
Note that the setting here slightly differs from BIBREF30 . There, tasks are less strongly related to one another; for example, they consider stance detection, aspect-based sentiment analysis and natural language inference. Here, we have different domains, as opposed to conceptually different tasks, but use their framework, as we have the same underlying problem of disparate label spaces. A more formal problem definition follows next, as our evidence ranking and veracity prediction model in Section SECREF22 then builds on it.
We frame our problem as a multi-task learning one, where access to labelled datasets for INLINEFORM0 tasks INLINEFORM1 is given at training time with a target task INLINEFORM2 that is of particular interest. The training dataset for task INLINEFORM3 consists of INLINEFORM4 examples INLINEFORM5 and their labels INLINEFORM6 . The base model is a classic deep neural network MTL model BIBREF31 that shares its parameters across tasks and has task-specific softmax output layers that output a probability distribution INLINEFORM7 for task INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the weight matrix and bias term of the output layer of task INLINEFORM3 respectively, INLINEFORM4 is the jointly learned hidden representation, INLINEFORM5 is the number of labels for task INLINEFORM6 , and INLINEFORM7 is the dimensionality of INLINEFORM8 . The MTL model is trained to minimise the sum of individual task losses INLINEFORM9 using a negative log-likelihood objective.
To learn the relationships between labels, a Label Embedding Layer (LEL) embeds labels of all tasks in a joint Euclidian space. Instead of training separate softmax output layers as above, a label compatibility function INLINEFORM0 measures how similar a label with embedding INLINEFORM1 is to the hidden representation INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is the dot product. Padding is applied such that INLINEFORM1 and INLINEFORM2 have the same dimensionality. Matrix multiplication and softmax are used for making predictions: DISPLAYFORM0
where INLINEFORM0 is the label embedding matrix for all tasks and INLINEFORM1 is the dimensionality of the label embeddings. We apply a task-specific mask to INLINEFORM2 in order to obtain a task-specific probability distribution INLINEFORM3 . The LEL is shared across all tasks, which allows the model to learn the relationships between labels in the joint embedding space.
Joint Evidence Ranking and Claim Veracity Prediction
So far, we have ignored the issue of how to obtain claim representation, as the base model described in the previous section is agnostic to how instances are encoded. A very simple approach, which we report as a baseline, is to encode claim texts only. Such a model ignores evidence for and against a claim, and ends up guessing the veracity based on surface patterns observed in the claim texts.
We next introduce two variants of evidence-based veracity prediction models that encode 10 pieces of evidence in addition to the claim. Here, we opt to encode search snippets as opposed to whole retrieved pages. While the latter would also be possible, it comes with a number of additional challenges, such as encoding large documents, parsing tables or PDF files, and encoding images or videos on these pages, which we leave to future work. Search snippets also have the benefit that they already contain summaries of the part of the page content that is most related to the claim.
Our problem is to obtain encodings for INLINEFORM0 examples INLINEFORM1 . For simplicity, we will henceforth drop the task superscript and refer to instances as INLINEFORM2 , as instance encodings are learned in a task-agnostic fashion. Each example further consists of a claim INLINEFORM3 and INLINEFORM4 evidence pages INLINEFORM5 .
Each claim and evidence page is encoded with a BiLSTM to obtain a sentence embedding, which is the concatenation of the last state of the forward and backward reading of the sentence, i.e. INLINEFORM0 , where INLINEFORM1 is the sentence embedding.
Next, we want to combine claims and evidence sentence embeddings into joint instance representations. In the simplest case, referred to as model variant crawled_avg, we mean average the BiLSTM sentence embeddings of all evidence pages (signified by the overline) and concatenate those with the claim embeddings, i.e. DISPLAYFORM0
where INLINEFORM0 is the resulting encoding for training example INLINEFORM1 and INLINEFORM2 denotes vector concatenation. However, this has the disadvantage that all evidence pages are considered equal.
The here proposed alternative instance encoding model, crawled_ranked, which achieves the highest overall performance as discussed in Section SECREF5 , learns the compatibility between an instance's claim and each evidence page. It ranks evidence pages by their utility for the veracity prediction task, and then uses the resulting ranking to obtain a weighted combination of all claim-evidence pairs. No direct labels are available to learn the ranking of individual documents, only for the veracity of the associated claim, so the model has to learn evidence ranks implicitly.
To combine claim and evidence representations, we use the matching model proposed for the task of natural language inference by BIBREF32 and adapt it to combine an instance's claim representation with each evidence representation, i.e. DISPLAYFORM0
where INLINEFORM0 is the resulting encoding for training example INLINEFORM1 and evidence page INLINEFORM2 , INLINEFORM3 denotes vector concatenation, and INLINEFORM4 denotes the dot product.
All joint claim-evidence representations INLINEFORM0 are then projected into the binary space via a fully connected layer INLINEFORM1 , followed by a non-linear activation function INLINEFORM2 , to obtain a soft ranking of claim-evidence pairs, in practice a 10-dimensional vector, DISPLAYFORM0
where INLINEFORM0 denotes concatenation.
Scores for all labels are obtained as per ( EQREF28 ) above, with the same input instance embeddings as for the evidence ranker, i.e. INLINEFORM0 . Final predictions for all claim-evidence pairs are then obtained by taking the dot product between the label scores and binary evidence ranking scores, i.e. DISPLAYFORM0
Note that the novelty here is that, unlike for the model described in BIBREF32 , we have no direct labels for learning weights for this matching model. Rather, our model has to implicitly learn these weights for each claim-evidence pair in an end-to-end fashion given the veracity labels.
Metadata
We experiment with how useful claim metadata is, and encode the following as one-hot vectors: speaker, category, tags and linked entities. We do not encode `Reason' as it gives away the label, and do not include `Checker' as there are too many unique checkers for this information to be relevant. The claim publication date is potentially relevant, but it does not make sense to merely model this as a one-hot feature, so we leave incorporating temporal information to future work.
Since all metadata consists of individual words and phrases, a sequence encoder is not necessary, and we opt for a CNN followed by a max pooling operation as used in BIBREF3 to encode metadata for fact checking. The max-pooled metadata representations, denoted INLINEFORM0 , are then concatenated with the instance representations, e.g. for the most elaborate model, crawled_ranked, these would be concatenated with INLINEFORM1 .
Experimental Setup
The base sentence embedding model is a BiLSTM over all words in the respective sequences with randomly initialised word embeddings, following BIBREF30 . We opt for this strong baseline sentence encoding model, as opposed to engineering sentence embeddings that work particularly well for this dataset, to showcase the dataset. We would expect pre-trained contextual encoding models, e.g. ELMO BIBREF33 , ULMFit BIBREF34 , BERT BIBREF35 , to offer complementary performance gains, as has been shown for a few recent papers BIBREF36 , BIBREF37 .
For claim veracity prediction without evidence documents with the MTL with LEL model, we use the following sentence encoding variants: claim-only, which uses a BiLSTM-based sentence embedding as input, and claim-only_embavg, which uses a sentence embedding based on mean averaged word embeddings as input.
We train one multi-task model per task (i.e., one model per domain). We perform a grid search over the following hyperparameters, tuned on the respective dev set, and evaluate on the correspoding test set (final settings are underlined): word embedding size [64, 128, 256], BiLSTM hidden layer size [64, 128, 256], number of BiLSTM hidden layers [1, 2, 3], BiLSTM dropout on input and output layers [0.0, 0.1, 0.2, 0.5], word-by-word-attention for BiLSTM with window size 10 BIBREF38 [True, False], skip-connections for the BiLSTM [True, False], batch size [32, 64, 128], label embedding size [16, 32, 64]. We use ReLU as an activation function for both the BiLSTM and the CNN. For the CNN, the following hyperparameters are used: number filters [32], kernel size [32]. We train using cross-entropy loss and the RMSProp optimiser with initial learning rate of INLINEFORM0 and perform early stopping on the dev set with a patience of 3.
Results
For each domain, we compute the Micro as well as Macro F1, then mean average results over all domains. Core results with all vs. no metadata are shown in Table TABREF30 . We first experiment with different base model variants and find that label embeddings improve results, and that the best proposed models utilising multiple domains outperform single-task models (see Table TABREF36 ). This corroborates the findings of BIBREF30 . Per-domain results with the best model are shown in Table TABREF34 . Domain names are from hereon after abbreviated for brevity, see Table TABREF1 in the appendix for correspondences to full website names. Unsurprisingly, it is hard to achieve a high Macro F1 for domains with many labels, e.g. tron and snes. Further, some domains, surprisingly mostly with small numbers of instances, seem to be very easy – a perfect Micro and Macro F1 score of 1.0 is achieved on ranz, bove, buca, fani and thal. We find that for those domains, the verdict is often already revealed as part of the claim using explicit wording.
Our evidence-based claim veracity prediction models outperform claim-only veracity prediction models by a large margin. Unsurprisingly, claim-only_embavg is outperformed by claim-only. Further, crawled_ranked is our best-performing model in terms of Micro F1 and Macro F1, meaning that our model captures that not every piece of evidence is equally important, and can utilise this for veracity prediction.
We perform an ablation analysis of how metadata impacts results, shown in Table TABREF35 . Out of the different types of metadata, topic tags on their own contribute the most. This is likely because they offer highly complementary information to the claim text of evidence pages. Only using all metadata together achieves a higher Macro F1 at similar Micro F1 than using no metadata at all. To further investigate this, we split the test set into those instances for which no metadata is available vs. those for which metadata is available. We find that encoding metadata within the model hurts performance for domains where no metadata is available, but improves performance where it is. In practice, an ensemble of both types of models would be sensible, as well as exploring more involved methods of encoding metadata.
Analysis and Discussion
An analysis of labels frequently confused with one another, for the largest domain `pomt' and best-performing model crawled_ranked + meta is shown in Figure FIGREF39 . The diagonal represents when gold and predicted labels match, and the numbers signify the number of test instances. One can observe that the model struggles more to detect claims with labels `true' than those with label `false'. Generally, many confusions occur over close labels, e.g. `half-true' vs. `mostly true'. We further analyse what properties instances that are predicted correctly vs. incorrectly have, using the model crawled_ranked meta. We find that, unsurprisingly, longer claims are harder to classify correctly, and that claims with a high direct token overlap with evidence pages lead to a high evidence ranking. When it comes to frequently occurring tags and entities, very general tags such as `government-and-politics' or `tax' that do not give away much, frequently co-occur with incorrect predictions, whereas more specific tags such as `brisbane-4000' or `hong-kong' tend to co-occur with correct predictions. Similar trends are observed for bigrams. This means that the model has an easy time succeeding for instances where the claims are short, where specific topics tend to co-occur with certain veracities, and where evidence documents are highly informative. Instances with longer, more complex claims where evidence is ambiguous remain challenging.
Conclusions
We present a new, real-world fact checking dataset, currently the largest of its kind. It consists of 34,918 claims collected from 26 fact checking websites, rich metadata and 10 retrieved evidence pages per claim. We find that encoding the metadata as well evidence pages helps, and introduce a new joint model for ranking evidence pages and predicting veracity.
Acknowledgments
This research is partially supported by QUARTZ (721321, EU H2020 MSCA-ITN) and DABAI (5153-00004A, Innovation Fund Denmark).
Appendix
Summary statistics for claim collection. “Domain” indicates the domain name used for the veracity prediction experiments, “–” indicates that the website was not used due to missing or insufficient claim labels, see Section SECREF12 .
Comparison of fact checking datasets. Doc = all doc types (including tweets, replies, etc.). SoA perform indicates state-of-the-art performance. INLINEFORM0 indicates that claims are not naturally occuring: BIBREF6 use events as claims; BIBREF7 use DBPedia tiples as claims; BIBREF9 use tweets as claims; and BIBREF2 rewrite sentences in Wikipedia as claims. INLINEFORM1 denotes that the SoA performance is from other papers. Best performance for BIBREF3 is from BIBREF40 ; BIBREF2 from BIBREF28 ; BIBREF10 from BIBREF42 in English, BIBREF12 from BIBREF26 ; and BIBREF13 from BIBREF39 . | Unanswerable |
c33d0bc5484c38de0119c8738ffa985d1bd64424 | c33d0bc5484c38de0119c8738ffa985d1bd64424_0 | Q: Do the images have multilingual annotations or monolingual ones?
Text: Introduction
Recent advances in learning distributed representations for words (i.e., word embeddings) have resulted in improvements across numerous natural language understanding tasks BIBREF0 , BIBREF1 . These methods use unlabeled text corpora to model the semantic content of words using their co-occurring context words. Key to this is the observation that semantically similar words have similar contexts BIBREF2 , thus leading to similar word embeddings. A limitation of these word embedding approaches is that they only produce monolingual embeddings. This is because word co-occurrences are very likely to be limited to being within language rather than across language in text corpora. Hence semantically similar words across languages are unlikely to have similar word embeddings.
To remedy this, there has been recent work on learning multilingual word embeddings, in which semantically similar words within and across languages have similar word embeddings BIBREF3 . Multilingual embeddings are not just interesting as an interlingua between multiple languages; they are useful in many downstream applications. For example, one application of multilingual embeddings is to find semantically similar words and phrases across languages BIBREF4 . Another use of multilingual embeddings is in enabling zero-shot learning on unseen languages, just as monolingual word embeddings enable predictions on unseen words BIBREF5 . In other words, a classifier using pretrained multilingual word embeddings can generalize to other languages even if training data is only in English. Interestingly, multilingual embeddings have also been shown to improve monolingual task performance BIBREF6 , BIBREF7 .
Consequently, multilingual embeddings can be very useful for low-resource languages – they allow us to overcome the scarcity of data in these languages. However, as detailed in Section "Related Work" , most work on learning multilingual word embeddings so far has heavily relied on the availability of expensive resources such as word-aligned / sentence-aligned parallel corpora or bilingual lexicons. Unfortunately, this data can be prohibitively expensive to collect for many languages. Furthermore even for languages with such data available, the coverage of the data is a limiting factor that restricts how much of the semantic space can be aligned across languages. Overcoming this data bottleneck is a key contribution of our work.
We investigate the use of cheaply available, weakly-supervised image-text data for learning multilingual embeddings. Images are a rich, language-agnostic medium that can provide a bridge across languages. For example, the English word “cat” might be found on webpages containing images of cats. Similarly, the German word “katze” (meaning cat) is likely to be found on other webpages containing similar (or perhaps identical) images of cats. Thus, images can be used to learn that these words have similar semantic content. Importantly, image-text data is generally available on the internet even for low-resource languages.
As image data has proliferated on the internet, tools for understanding images have advanced considerably. Convolutional neural networks (CNNs) have achieved roughly human-level or better performance on vision tasks, particularly classification BIBREF8 , BIBREF9 , BIBREF10 . During classification of an image, CNNs compute intermediate outputs that have been used as generic image features that perform well across a variety of vision tasks BIBREF11 . We use these image features to enforce that words associated with similar images have similar embeddings. Since words associated with similar images are likely to have similar semantic content, even across languages, our learned embeddings capture crosslingual similarity.
There has been other recent work on reducing the amount of supervision required to learn multilingual embeddings (cf. Section "Related Work" ). These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space. A limitation with post-hoc alignment of monolingual embeddings, first noticed by BIBREF12 , is that doing training of monolingual embeddings and alignment separately may lead to worse results than joint training of embeddings in one step. Since the monolingual embedding objective is distinct from the multilingual embedding objective, monolingual embeddings are not required to capture all information helpful for post-hoc multilingual alignment. Post-hoc alignment loses out on some information, whereas joint training does not. BIBREF12 observe improved results using a joint training method compared to a similar post-hoc method. Thus, a joint training approach is desirable. To our knowledge, no previous method jointly learns multilingual word embeddings using weakly-supervised data available for low-resource languages.
To summarize: In this paper we propose an approach for learning multilingual word embeddings using image-text data jointly across all languages. We demonstrate that even a bag-of-words based embedding approach achieves performance competitive with the state-of-the-art on crosslingual semantic similarity tasks. We present experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We also provide a method for training and making predictions on multilingual word embeddings even when the language of the text is unknown.
Related Work
Most work on producing multilingual embeddings has relied on crosslingual human-labeled data, such as bilingual lexicons BIBREF13 , BIBREF4 , BIBREF6 , BIBREF14 or parallel/aligned corpora BIBREF15 , BIBREF4 , BIBREF16 , BIBREF17 . These works are also largely bilingual due to either limitations of methods or the requirement for data that exists only for a few language pairs. Bilingual embeddings are less desirable because they do not leverage the relevant resources of other languages. For example, in learning bilingual embeddings for English and French, it may be useful to leverage resources in Spanish, since French and Spanish are closely related. Bilingual embeddings are also limited in their applications to just one language pair.
For instance, BIBREF16 propose BiSkip, a model that extends the skip-gram approach of BIBREF18 to a bilingual parallel corpus. The embedding for a word is trained to predict not only its own context, but also the contexts for corresponding words in a second corpus in a different language. BIBREF4 extend this approach further to multiple languages. This method, called MultiSkip, is compared to our methods in Section "Results and Conclusions" .
There has been some recent work on reducing the amount of human-labeled data required to learn multilingual embeddings, enabling work on low-resource languages BIBREF19 , BIBREF20 , BIBREF21 . These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space, exploiting the structural similarity of monolingual embedding spaces first noticed by BIBREF13 . As discussed in Section "Introduction" , post-hoc alignment of monolingual embeddings is inherently suboptimal. For example, BIBREF19 and BIBREF20 use human-labeled data, along with shared surface forms across languages, to learn an alignment in the bilingual setting. BIBREF21 build on this for the multilingual setting, using no human-labeled data and instead using an adversarial approach to maximize alignment between monolingual embedding spaces given their structural similarities. This method (MUSE) outperforms previous approaches and represents the state-of-the-art. We compare it to our methods in Section "Results and Conclusions" .
There has been other work using image-text data to improve image and caption representations for image tasks and to learn word translations BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , but no work using images to learn competitive multilingual word-level embeddings.
Data
We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages.
Though the specific dataset we use is proprietary, BIBREF26 have obtained a similar dataset, using the Google Images search interface, that comprises queries in 100 languages.
Methods
We present a series of experiments to investigate the usefulness of multimodal image-text data in learning multilingual embeddings. The crux of our method involves enforcing that for each query-image pair, the query representation ( $Q$ ) is similar to the image representation ( $I$ ). The query representation is a function of the word embeddings for each word in a (language-tagged) query, so enforcing this constraint on the query representation also has the effect of constraining the corresponding multilingual word embeddings.
Given some $Q$ and some $I$ , we enforce that the representations are similar by maximizing their cosine similarity. We use a combination of cosine similarity and softmax objective to produce our loss. This high-level approach is illustrated in Figure 1 . In particular, we calculate unweighted loss as follows for a query $q$ and a corresponding image $i$ : $\textrm {loss}(\textrm {Query} \: q, \textrm {Image} \: i) = -\log {\frac{e^{\frac{Q_q^T I_i}{|Q_q| |I_i|}}}{\sum _{j} e^{\frac{Q_q^T I_j}{{|Q_q| |I_j|}}}}}$
where $Q_q$ is the query representation for query $q$ ; $I_i$ is the image representation corresponding to image $i$ ; $j$ ranges over all images in the corpus; and $Q_q^T I_i$ is the dot product of the vectors $Q_q$ and $I_i$ . Note that this requires that $Q_q$ and $I_j$ have identical dimensionality. If a weight $q$0 is provided for the (query, image) pair, the loss is multiplied by the weight. Observe that $q$1 and $q$2 remain unspecified for now: we detail different experiments involving different representations below.
In practice, given the size of our dataset, calculating the full denominator of the loss for a query, image pair would involve iterating through each image for each query, which is $O(n^2)$ in the number of training examples. To remedy this, we calculated the loss within each batch separately. That is, the denominator of the loss only involved summing over images in the same batch as the query. We used a batch size of 1000 for all experiments. In principle, the negative sampling approach used by BIBREF0 could be used instead to prevent quadratic time complexity.
We can interpret this loss function as producing a softmax classification task for queries and images: given a query, the model needs to predict the image relevant to that query. The cosine similarity between the image representation $I_i$ and the query representation $Q_q$ is normalized under softmax to produce a “belief” that the image $i$ is the image relevant to the query $q$ . This is analogous to the skip-gram model proposed by BIBREF18 , although we use cosine similarity instead of dot product. Just as the skip-gram model ensures the embeddings of words are predictive of their contexts, our model ensures the embeddings of queries (and their constituent words) are predictive of images relevant to them.
Leveraging Image Understanding
Given the natural co-occurrence of images and text on the internet and the availability of powerful generic features, a first approach is to use generic image features as the foundation for the image representation $I$ . We apply two fully-connected layers to learn a transformation from image features to the final representation. We can compute the image representation $I_i$ for image $i$ as: $I_i = ReLU(U * ReLU(Vf_i + b_1) + b_2)$
where $f_i$ is a $d$ -dimensional column vector representing generic image features for image $i$ , $V$ is a $m \times d$ matrix, $b_1$ is an $m$ -dimensional column vector, $U$ is a $n \times m$ matrix, and $b_2$ is an $d$0 -dimensional column vector. We use a rectified linear unit activation function after each fully-connected layer.
We use 64-dimensional image features derived from image-text data using an approach similar to that used by BIBREF27 , who train image features to discriminate between fine-grained semantic image labels. We run two experiments with $m$ and $n$ : in the first, $m = 200$ and $n = 100$ (producing 100-dimensional embeddings), and in the second, $m = 300$ and $n = 300$ (producing 300-dimensional embeddings).
For the query representation, we use a simple approach. The query representation is just the average of its constituent multilingual embeddings. Then, as the query representation is constrained to be similar to corresponding image representations, the multilingual embeddings (randomly initialized) are also constrained.
Note that each word in each query is prefixed with the language of the query. For example, the English query “back pain” is treated as “en:back en:pain”, and the multilingual embeddings that are averaged are those for “en:back” and “en:pain”. This means that words in different languages with shared surface forms are given separate embeddings. We experiment with shared embeddings for words with shared surface forms in Section "Discussion" .
In practice, we use a fixed multilingual vocabulary for the word embeddings, given the size of the dataset. Out-of-vocabulary words are handled by hashing them to a fixed number of embedding buckets (we use 1,000,000). That is, there are 1,000,000 embeddings for all out-of-vocabulary words, and the assignment of embedding for each word is determined by a hash function.
Our approach for leveraging image understanding is shown in Figure 2 .
Co-Occurrence Only
Another approach for generating query and image representations is treating images as a black box. Without using pixel data, how well can we do? Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries. This method serves as a baseline to our initial approach leveraging image understanding.
In this setting, we keep query representations the same, and we modify image representations as follows: the image representation for an image is a randomly initialized, trainable vector (of the same dimensionality as the query representation, to ensure the cosine similarity can be calculated). The intuition for this approach is that if two queries are both associated with an image, their query representations will both be constrained to be similar to the same vector, and so the query representations themselves are constrained to be similar. This approach is a simple way to adapt our method to make use of only co-occurrence statistics.
One concern with this approach is that many queries may not have significant image co-occurrences with other queries. In particular, there are likely many images associated with only a single query. These isolated images pull query representations toward their respective random image representations (adding noise), but do not provide any information about the relationships between queries. Additionally, even for images associated with multiple queries, if these queries are all within language, then they may not be very helpful for learning multilingual embeddings. Consequently, we run two experiments: one with the original dataset and one with a subset of the dataset that contains only images associated with queries in at least two different languages. This subset of the dataset has 540 million query, image pairs (down from 3 billion). For both experiments, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings.
Language Unaware Query Representation
In Section "Leveraging Image Understanding" , our method for computing query representations involved prepending language prefixes to each token, ensuring that the multilingual embedding for the English word “pain” is distinct from that for the French word “pain” (meaning bread). These query representations are language aware, meaning that a language tag is required for each query during both training and prediction. In the weakly-supervised setting, we may want to relax this requirement, as language-tagged data is not always readily available.
In our language unaware setting, language tags are not necessary. Each surface form in each query has a distinct embedding, and words with shared surface forms across languages (e.g., English “pain” and French “pain”) have a shared embedding. In this sense, shared surface forms are used as a bridge between languages. This is illustrated in Figure 3 . This may be helpful in certain cases, as for English “actor” and Spanish “actor”. The image representations leverage generic image features, exactly as in Section "Leveraging Image Understanding" . In our language-unaware experiment, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings.
Evaluation
We evaluate our learned multilingual embeddings using six crosslingual semantic similarity tasks, two multilingual document classification tasks, and 13 monolingual semantic similarity tasks. We adapt code from BIBREF4 and BIBREF28 for evaluation.
This task measures how well multilingual embeddings capture semantic similarity of words, as judged by human raters. The task consists of a series of crosslingual word pairs. For each word pair in the task, human raters judge how semantically similar the words are. The model also predicts how similar the words are, using the cosine similarity between the embeddings. The score on the task is the Spearman correlation between the human ratings and the model predictions.
The specific six subtasks we use are part of the Rubenstein-Goodenough dataset BIBREF29 and detailed by BIBREF4 . We also include an additional task aggregating the six subtasks.
In this task, a classifier built on top of learned multilingual embeddings is trained on the RCV corpus of newswire text as in BIBREF15 and BIBREF4 . The corpus consists of documents in seven languages on four topics, and the classifier predicts the topic. The score on the task is test accuracy. Note that each document is monolingual, so this task measures performance within languages for multiple languages (as opposed to crosslingual performance).
This task is the same as the crosslingual semantic similarity task described above, but all word pairs are in English. We use this to understand how monolingual performance differs across methods. We present an average score across the 13 subtasks provided by BIBREF28 .
Evaluation tasks also report a coverage, which is the fraction of the test data that a set of multilingual embeddings is able to make predictions on. This is needed because not every word in the evaluation task has a corresponding learned multilingual embedding. Thus, if coverage is low, scores are less likely to be reliable.
Results and Conclusions
We first present results on the crosslingual semantic similarity and multilingual document classification for our previously described experiments. We compare against the multiSkip approach by BIBREF4 and the state-of-the-art MUSE approach by BIBREF21 . Results for crosslingual semantic similarity are presented in Table 1 , and results for multilingual document classification are presented in Table 2 .
Our experiments corresponding to Section "Leveraging Image Understanding" are titled ImageVec 100-Dim and ImageVec 300-Dim in Tables 1 and 2 . Both experiments significantly outperform the multiSkip experiments in all crosslingual semantic similarity subtasks, and the 300-dimensional experiment slightly outperforms MUSE as well. Note that coverage scores are generally around 0.8 for these experiments. In multilingual document classification, MUSE achieves the best scores, and while our 300-dimensional experiment outperforms the multiSkip 40-dimensional experiment, it does not perform as well as the 512-dimensional experiment. Note that coverage scores are lower on these tasks.
One possible explanation for the difference in performance across the crosslingual semantic similarity task and multilingual document classification task is that the former measures crosslingual performance, whereas the latter measures monolingual performance in multiple languages, as described in Section UID10 . We briefly discuss further evidence that our models perform less well in the monolingual context below.
Discussion
We demonstrated how to learn competitive multilingual word embeddings using image-text data – which is available for low-resource languages. We have presented experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We have also proposed a method for training and making predictions on multilingual word embeddings even when language tags for words are unavailable. Using a simple bag-of-words approach, we achieve performance competitive with the state-of-the-art on crosslingual semantic similarity tasks.
We have also identified a direction for future work: within language performance is weaker than the state-of-the-art, likely because our work leveraged only image-text data rather than a large monolingual corpus. Fortunately, our joint training approach provides a simple extension of our method for future work: multi-task joint training. For example, in a triple-task setting, we can simultaneously (1) constrain query and relevant image representations to be similar and (2) constrain word embeddings to be predictive of context in large monolingual corpora and (3) constrain representations for parallel text across languages to be similar. For the second task, implementing recent advances in producing monolingual embeddings, such as using subword information, is likely to improve results. Multilingual embeddings learned in a multi-task setting would reap both the benefits of our methods and existing methods for producing word embeddings. For example, while our method is likely to perform worse for more abstract words, when combined with existing approaches it is likely to achieve more consistent performance.
An interesting effect of our approach is that queries and images are embedded into a shared space through the query and image representations. This setup enables a range of future research directions and applications, including better image features, better monolingual text representations (especially for visual tasks), nearest-neighbor search for text or images given one modality (or both), and joint prediction using text and images. | monolingual |
93b1b94b301a46251695db8194a2536639a22a88 | 93b1b94b301a46251695db8194a2536639a22a88_0 | Q: Could you learn such embedding simply from the image annotations and without using visual information?
Text: Introduction
Recent advances in learning distributed representations for words (i.e., word embeddings) have resulted in improvements across numerous natural language understanding tasks BIBREF0 , BIBREF1 . These methods use unlabeled text corpora to model the semantic content of words using their co-occurring context words. Key to this is the observation that semantically similar words have similar contexts BIBREF2 , thus leading to similar word embeddings. A limitation of these word embedding approaches is that they only produce monolingual embeddings. This is because word co-occurrences are very likely to be limited to being within language rather than across language in text corpora. Hence semantically similar words across languages are unlikely to have similar word embeddings.
To remedy this, there has been recent work on learning multilingual word embeddings, in which semantically similar words within and across languages have similar word embeddings BIBREF3 . Multilingual embeddings are not just interesting as an interlingua between multiple languages; they are useful in many downstream applications. For example, one application of multilingual embeddings is to find semantically similar words and phrases across languages BIBREF4 . Another use of multilingual embeddings is in enabling zero-shot learning on unseen languages, just as monolingual word embeddings enable predictions on unseen words BIBREF5 . In other words, a classifier using pretrained multilingual word embeddings can generalize to other languages even if training data is only in English. Interestingly, multilingual embeddings have also been shown to improve monolingual task performance BIBREF6 , BIBREF7 .
Consequently, multilingual embeddings can be very useful for low-resource languages – they allow us to overcome the scarcity of data in these languages. However, as detailed in Section "Related Work" , most work on learning multilingual word embeddings so far has heavily relied on the availability of expensive resources such as word-aligned / sentence-aligned parallel corpora or bilingual lexicons. Unfortunately, this data can be prohibitively expensive to collect for many languages. Furthermore even for languages with such data available, the coverage of the data is a limiting factor that restricts how much of the semantic space can be aligned across languages. Overcoming this data bottleneck is a key contribution of our work.
We investigate the use of cheaply available, weakly-supervised image-text data for learning multilingual embeddings. Images are a rich, language-agnostic medium that can provide a bridge across languages. For example, the English word “cat” might be found on webpages containing images of cats. Similarly, the German word “katze” (meaning cat) is likely to be found on other webpages containing similar (or perhaps identical) images of cats. Thus, images can be used to learn that these words have similar semantic content. Importantly, image-text data is generally available on the internet even for low-resource languages.
As image data has proliferated on the internet, tools for understanding images have advanced considerably. Convolutional neural networks (CNNs) have achieved roughly human-level or better performance on vision tasks, particularly classification BIBREF8 , BIBREF9 , BIBREF10 . During classification of an image, CNNs compute intermediate outputs that have been used as generic image features that perform well across a variety of vision tasks BIBREF11 . We use these image features to enforce that words associated with similar images have similar embeddings. Since words associated with similar images are likely to have similar semantic content, even across languages, our learned embeddings capture crosslingual similarity.
There has been other recent work on reducing the amount of supervision required to learn multilingual embeddings (cf. Section "Related Work" ). These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space. A limitation with post-hoc alignment of monolingual embeddings, first noticed by BIBREF12 , is that doing training of monolingual embeddings and alignment separately may lead to worse results than joint training of embeddings in one step. Since the monolingual embedding objective is distinct from the multilingual embedding objective, monolingual embeddings are not required to capture all information helpful for post-hoc multilingual alignment. Post-hoc alignment loses out on some information, whereas joint training does not. BIBREF12 observe improved results using a joint training method compared to a similar post-hoc method. Thus, a joint training approach is desirable. To our knowledge, no previous method jointly learns multilingual word embeddings using weakly-supervised data available for low-resource languages.
To summarize: In this paper we propose an approach for learning multilingual word embeddings using image-text data jointly across all languages. We demonstrate that even a bag-of-words based embedding approach achieves performance competitive with the state-of-the-art on crosslingual semantic similarity tasks. We present experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We also provide a method for training and making predictions on multilingual word embeddings even when the language of the text is unknown.
Related Work
Most work on producing multilingual embeddings has relied on crosslingual human-labeled data, such as bilingual lexicons BIBREF13 , BIBREF4 , BIBREF6 , BIBREF14 or parallel/aligned corpora BIBREF15 , BIBREF4 , BIBREF16 , BIBREF17 . These works are also largely bilingual due to either limitations of methods or the requirement for data that exists only for a few language pairs. Bilingual embeddings are less desirable because they do not leverage the relevant resources of other languages. For example, in learning bilingual embeddings for English and French, it may be useful to leverage resources in Spanish, since French and Spanish are closely related. Bilingual embeddings are also limited in their applications to just one language pair.
For instance, BIBREF16 propose BiSkip, a model that extends the skip-gram approach of BIBREF18 to a bilingual parallel corpus. The embedding for a word is trained to predict not only its own context, but also the contexts for corresponding words in a second corpus in a different language. BIBREF4 extend this approach further to multiple languages. This method, called MultiSkip, is compared to our methods in Section "Results and Conclusions" .
There has been some recent work on reducing the amount of human-labeled data required to learn multilingual embeddings, enabling work on low-resource languages BIBREF19 , BIBREF20 , BIBREF21 . These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space, exploiting the structural similarity of monolingual embedding spaces first noticed by BIBREF13 . As discussed in Section "Introduction" , post-hoc alignment of monolingual embeddings is inherently suboptimal. For example, BIBREF19 and BIBREF20 use human-labeled data, along with shared surface forms across languages, to learn an alignment in the bilingual setting. BIBREF21 build on this for the multilingual setting, using no human-labeled data and instead using an adversarial approach to maximize alignment between monolingual embedding spaces given their structural similarities. This method (MUSE) outperforms previous approaches and represents the state-of-the-art. We compare it to our methods in Section "Results and Conclusions" .
There has been other work using image-text data to improve image and caption representations for image tasks and to learn word translations BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , but no work using images to learn competitive multilingual word-level embeddings.
Data
We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages.
Though the specific dataset we use is proprietary, BIBREF26 have obtained a similar dataset, using the Google Images search interface, that comprises queries in 100 languages.
Methods
We present a series of experiments to investigate the usefulness of multimodal image-text data in learning multilingual embeddings. The crux of our method involves enforcing that for each query-image pair, the query representation ( $Q$ ) is similar to the image representation ( $I$ ). The query representation is a function of the word embeddings for each word in a (language-tagged) query, so enforcing this constraint on the query representation also has the effect of constraining the corresponding multilingual word embeddings.
Given some $Q$ and some $I$ , we enforce that the representations are similar by maximizing their cosine similarity. We use a combination of cosine similarity and softmax objective to produce our loss. This high-level approach is illustrated in Figure 1 . In particular, we calculate unweighted loss as follows for a query $q$ and a corresponding image $i$ : $\textrm {loss}(\textrm {Query} \: q, \textrm {Image} \: i) = -\log {\frac{e^{\frac{Q_q^T I_i}{|Q_q| |I_i|}}}{\sum _{j} e^{\frac{Q_q^T I_j}{{|Q_q| |I_j|}}}}}$
where $Q_q$ is the query representation for query $q$; $I_i$ is the image representation corresponding to image $i$; $j$ ranges over all images in the corpus; and $Q_q^T I_i$ is the dot product of the vectors $Q_q$ and $I_i$. Note that this requires that $Q_q$ and $I_j$ have identical dimensionality. If a weight is provided for the (query, image) pair, the loss is multiplied by that weight. Observe that the representations $Q$ and $I$ remain unspecified for now: we detail different experiments involving different representations below.
In practice, given the size of our dataset, calculating the full denominator of the loss for a query, image pair would involve iterating through each image for each query, which is $O(n^2)$ in the number of training examples. To remedy this, we calculated the loss within each batch separately. That is, the denominator of the loss only involved summing over images in the same batch as the query. We used a batch size of 1000 for all experiments. In principle, the negative sampling approach used by BIBREF0 could be used instead to prevent quadratic time complexity.
We can interpret this loss function as producing a softmax classification task for queries and images: given a query, the model needs to predict the image relevant to that query. The cosine similarity between the image representation $I_i$ and the query representation $Q_q$ is normalized under softmax to produce a “belief” that the image $i$ is the image relevant to the query $q$ . This is analogous to the skip-gram model proposed by BIBREF18 , although we use cosine similarity instead of dot product. Just as the skip-gram model ensures the embeddings of words are predictive of their contexts, our model ensures the embeddings of queries (and their constituent words) are predictive of images relevant to them.
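To make the in-batch objective concrete, the following NumPy sketch computes the batched loss described above; the function name, array shapes and the optional weight handling are assumptions for illustration rather than details from the text.

```python
import numpy as np

def in_batch_loss(Q, I, weights=None):
    """Softmax over in-batch cosine similarities; the paired image sits on the diagonal.

    Q: (B, D) query representations for one batch.
    I: (B, D) image representations, where I[k] is the image paired with Q[k].
    weights: optional (B,) relevance weights for the (query, image) pairs.
    """
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)   # cosine similarity = dot product
    In = I / np.linalg.norm(I, axis=1, keepdims=True)   #   of L2-normalized vectors
    sims = Qn @ In.T                                    # (B, B) similarity matrix
    logits = sims - sims.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_pair = -np.diag(log_probs)                      # -log p(paired image | query)
    if weights is not None:
        per_pair = per_pair * weights
    return per_pair.mean()
```

Restricting the denominator to the batch is exactly the trick that avoids the quadratic cost mentioned above; negative sampling would be the alternative.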
Leveraging Image Understanding
Given the natural co-occurrence of images and text on the internet and the availability of powerful generic features, a first approach is to use generic image features as the foundation for the image representation $I$ . We apply two fully-connected layers to learn a transformation from image features to the final representation. We can compute the image representation $I_i$ for image $i$ as: $I_i = ReLU(U * ReLU(Vf_i + b_1) + b_2)$
where $f_i$ is a $d$-dimensional column vector representing generic image features for image $i$, $V$ is an $m \times d$ matrix, $b_1$ is an $m$-dimensional column vector, $U$ is an $n \times m$ matrix, and $b_2$ is an $n$-dimensional column vector. We use a rectified linear unit activation function after each fully-connected layer.
We use 64-dimensional image features derived from image-text data using an approach similar to that used by BIBREF27 , who train image features to discriminate between fine-grained semantic image labels. We run two experiments with $m$ and $n$ : in the first, $m = 200$ and $n = 100$ (producing 100-dimensional embeddings), and in the second, $m = 300$ and $n = 300$ (producing 300-dimensional embeddings).
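The image tower is then just the stated two-layer transformation; a small sketch with the 64-to-200-to-100 shapes of the first experiment is given below, where the randomly initialized parameters stand in for trained ones.

```python
import numpy as np

def image_tower(f, V, b1, U, b2):
    """Two fully-connected ReLU layers mapping generic image features f_i to I_i."""
    relu = lambda x: np.maximum(x, 0.0)
    return relu(U @ relu(V @ f + b1) + b2)

rng = np.random.default_rng(0)
f = rng.normal(size=64)                               # generic 64-dim image features
V, b1 = 0.01 * rng.normal(size=(200, 64)), np.zeros(200)
U, b2 = 0.01 * rng.normal(size=(100, 200)), np.zeros(100)
I_i = image_tower(f, V, b1, U, b2)                    # 100-dimensional image representation
```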
For the query representation, we use a simple approach. The query representation is just the average of its constituent multilingual embeddings. Then, as the query representation is constrained to be similar to corresponding image representations, the multilingual embeddings (randomly initialized) are also constrained.
Note that each word in each query is prefixed with the language of the query. For example, the English query “back pain” is treated as “en:back en:pain”, and the multilingual embeddings that are averaged are those for “en:back” and “en:pain”. This means that words in different languages with shared surface forms are given separate embeddings. We experiment with shared embeddings for words with shared surface forms in Section "Discussion" .
In practice, we use a fixed multilingual vocabulary for the word embeddings, given the size of the dataset. Out-of-vocabulary words are handled by hashing them to a fixed number of embedding buckets (we use 1,000,000). That is, there are 1,000,000 embeddings for all out-of-vocabulary words, and the assignment of embedding for each word is determined by a hash function.
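A sketch of the corresponding query side follows: language-aware prefixing, hashing of out-of-vocabulary tokens into a fixed number of buckets, and averaging of the resulting embeddings. The toy vocabulary, the embedding tables and the use of Python's built-in hash() are illustrative assumptions (a production system would use a stable hash and trainable tables).

```python
import numpy as np

VOCAB = {"en:back": 0, "en:pain": 1}        # toy in-vocabulary table (illustrative)
NUM_BUCKETS = 1_000_000                     # hash buckets for out-of-vocabulary words

def tagged_ids(query, lang):
    ids = []
    for tok in query.lower().split():
        tagged = f"{lang}:{tok}"            # language-aware prefix, e.g. "en:pain"
        if tagged in VOCAB:
            ids.append(("vocab", VOCAB[tagged]))
        else:
            ids.append(("oov", hash(tagged) % NUM_BUCKETS))
    return ids

def query_representation(query, lang, vocab_emb, oov_emb):
    """Average of the (language-tagged) multilingual word embeddings in the query."""
    vecs = [vocab_emb[i] if kind == "vocab" else oov_emb[i]
            for kind, i in tagged_ids(query, lang)]
    return np.mean(vecs, axis=0)
```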
Our approach for leveraging image understanding is shown in Figure 2 .
Co-Occurrence Only
Another approach for generating query and image representations is treating images as a black box. Without using pixel data, how well can we do? Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries. This method serves as a baseline to our initial approach leveraging image understanding.
In this setting, we keep query representations the same, and we modify image representations as follows: the image representation for an image is a randomly initialized, trainable vector (of the same dimensionality as the query representation, to ensure the cosine similarity can be calculated). The intuition for this approach is that if two queries are both associated with an image, their query representations will both be constrained to be similar to the same vector, and so the query representations themselves are constrained to be similar. This approach is a simple way to adapt our method to make use of only co-occurrence statistics.
One concern with this approach is that many queries may not have significant image co-occurrences with other queries. In particular, there are likely many images associated with only a single query. These isolated images pull query representations toward their respective random image representations (adding noise), but do not provide any information about the relationships between queries. Additionally, even for images associated with multiple queries, if these queries are all within language, then they may not be very helpful for learning multilingual embeddings. Consequently, we run two experiments: one with the original dataset and one with a subset of the dataset that contains only images associated with queries in at least two different languages. This subset of the dataset has 540 million query, image pairs (down from 3 billion). For both experiments, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings.
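For the co-occurrence-only variant, the only change is the image side: each image id gets its own randomly initialized, trainable vector instead of a feature-based tower. A minimal sketch (dictionary-backed for clarity; an embedding table would be used in practice):

```python
import numpy as np

DIM = 100
rng = np.random.default_rng(0)
image_vectors = {}                           # image id -> trainable vector

def image_representation(image_id):
    if image_id not in image_vectors:
        image_vectors[image_id] = 0.1 * rng.normal(size=DIM)
    return image_vectors[image_id]

# Training is otherwise unchanged: the same in-batch loss pulls every query that
# co-occurs with a given image toward that image's vector, and hence toward each other.
```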
Language Unaware Query Representation
In Section "Leveraging Image Understanding" , our method for computing query representations involved prepending language prefixes to each token, ensuring that the multilingual embedding for the English word “pain” is distinct from that for the French word “pain” (meaning bread). These query representations are language aware, meaning that a language tag is required for each query during both training and prediction. In the weakly-supervised setting, we may want to relax this requirement, as language-tagged data is not always readily available.
In our language unaware setting, language tags are not necessary. Each surface form in each query has a distinct embedding, and words with shared surface forms across languages (e.g., English “pain” and French “pain”) have a shared embedding. In this sense, shared surface forms are used as a bridge between languages. This is illustrated in Figure 3 . This may be helpful in certain cases, as for English “actor” and Spanish “actor”. The image representations leverage generic image features, exactly as in Section "Leveraging Image Understanding" . In our language-unaware experiment, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings.
Evaluation
We evaluate our learned multilingual embeddings using six crosslingual semantic similarity tasks, two multilingual document classification tasks, and 13 monolingual semantic similarity tasks. We adapt code from BIBREF4 and BIBREF28 for evaluation.
This task measures how well multilingual embeddings capture semantic similarity of words, as judged by human raters. The task consists of a series of crosslingual word pairs. For each word pair in the task, human raters judge how semantically similar the words are. The model also predicts how similar the words are, using the cosine similarity between the embeddings. The score on the task is the Spearman correlation between the human ratings and the model predictions.
The specific six subtasks we use are part of the Rubenstein-Goodenough dataset BIBREF29 and detailed by BIBREF4 . We also include an additional task aggregating the six subtasks.
In this task, a classifier built on top of learned multilingual embeddings is trained on the RCV corpus of newswire text as in BIBREF15 and BIBREF4 . The corpus consists of documents in seven languages on four topics, and the classifier predicts the topic. The score on the task is test accuracy. Note that each document is monolingual, so this task measures performance within languages for multiple languages (as opposed to crosslingual performance).
This task is the same as the crosslingual semantic similarity task described above, but all word pairs are in English. We use this to understand how monolingual performance differs across methods. We present an average score across the 13 subtasks provided by BIBREF28 .
Evaluation tasks also report a coverage, which is the fraction of the test data that a set of multilingual embeddings is able to make predictions on. This is needed because not every word in the evaluation task has a corresponding learned multilingual embedding. Thus, if coverage is low, scores are less likely to be reliable.
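As a concrete illustration of the crosslingual semantic similarity scoring and the coverage statistic, a sketch of the evaluation loop is shown below; the variable names and the skipping rule for missing words are assumptions consistent with the description above.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_similarity_score(word_pairs, human_ratings, emb):
    """Spearman correlation between human ratings and embedding cosine similarities.

    word_pairs: list of language-tagged pairs, e.g. ("en:cat", "de:katze").
    human_ratings: list of similarity judgments from the raters.
    emb: dict mapping tagged words to vectors; pairs with a missing word are skipped,
         which is what the reported coverage measures.
    """
    preds, golds = [], []
    for (w1, w2), rating in zip(word_pairs, human_ratings):
        if w1 in emb and w2 in emb:
            preds.append(cosine(emb[w1], emb[w2]))
            golds.append(rating)
    rho, _ = spearmanr(golds, preds)
    coverage = len(golds) / len(human_ratings)
    return rho, coverage
```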
Results and Conclusions
We first present results on the crosslingual semantic similarity and multilingual document classification for our previously described experiments. We compare against the multiSkip approach by BIBREF4 and the state-of-the-art MUSE approach by BIBREF21 . Results for crosslingual semantic similarity are presented in Table 1 , and results for multilingual document classification are presented in Table 2 .
Our experiments corresponding to Section "Leveraging Image Understanding" are titled ImageVec 100-Dim and ImageVec 300-Dim in Tables 1 and 2 . Both experiments significantly outperform the multiSkip experiments in all crosslingual semantic similarity subtasks, and the 300-dimensional experiment slightly outperforms MUSE as well. Note that coverage scores are generally around 0.8 for these experiments. In multilingual document classification, MUSE achieves the best scores, and while our 300-dimensional experiment outperforms the multiSkip 40-dimensional experiment, it does not perform as well as the 512-dimensional experiment. Note that coverage scores are lower on these tasks.
One possible explanation for the difference in performance across the crosslingual semantic similarity task and the multilingual document classification task is that the former measures crosslingual performance, whereas the latter measures monolingual performance in multiple languages, as described in the Evaluation section. We briefly discuss further evidence that our models perform less well in the monolingual context below.
Discussion
We demonstrated how to learn competitive multilingual word embeddings using image-text data – which is available for low-resource languages. We have presented experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We have also proposed a method for training and making predictions on multilingual word embeddings even when language tags for words are unavailable. Using a simple bag-of-words approach, we achieve performance competitive with the state-of-the-art on crosslingual semantic similarity tasks.
We have also identified a direction for future work: within language performance is weaker than the state-of-the-art, likely because our work leveraged only image-text data rather than a large monolingual corpus. Fortunately, our joint training approach provides a simple extension of our method for future work: multi-task joint training. For example, in a triple-task setting, we can simultaneously (1) constrain query and relevant image representations to be similar and (2) constrain word embeddings to be predictive of context in large monolingual corpora and (3) constrain representations for parallel text across languages to be similar. For the second task, implementing recent advances in producing monolingual embeddings, such as using subword information, is likely to improve results. Multilingual embeddings learned in a multi-task setting would reap both the benefits of our methods and existing methods for producing word embeddings. For example, while our method is likely to perform worse for more abstract words, when combined with existing approaches it is likely to achieve more consistent performance.
An interesting effect of our approach is that queries and images are embedded into a shared space through the query and image representations. This setup enables a range of future research directions and applications, including better image features, better monolingual text representations (especially for visual tasks), nearest-neighbor search for text or images given one modality (or both), and joint prediction using text and images. | Yes |
e8029ec69b0b273954b4249873a5070c2a0edb8a | e8029ec69b0b273954b4249873a5070c2a0edb8a_0 | Q: How much important is the visual grounding in the learning of the multilingual representations?
Text: Introduction
Recent advances in learning distributed representations for words (i.e., word embeddings) have resulted in improvements across numerous natural language understanding tasks BIBREF0 , BIBREF1 . These methods use unlabeled text corpora to model the semantic content of words using their co-occurring context words. Key to this is the observation that semantically similar words have similar contexts BIBREF2 , thus leading to similar word embeddings. A limitation of these word embedding approaches is that they only produce monolingual embeddings. This is because word co-occurrences are very likely to be limited to being within language rather than across language in text corpora. Hence semantically similar words across languages are unlikely to have similar word embeddings.
To remedy this, there has been recent work on learning multilingual word embeddings, in which semantically similar words within and across languages have similar word embeddings BIBREF3 . Multilingual embeddings are not just interesting as an interlingua between multiple languages; they are useful in many downstream applications. For example, one application of multilingual embeddings is to find semantically similar words and phrases across languages BIBREF4 . Another use of multilingual embeddings is in enabling zero-shot learning on unseen languages, just as monolingual word embeddings enable predictions on unseen words BIBREF5 . In other words, a classifier using pretrained multilingual word embeddings can generalize to other languages even if training data is only in English. Interestingly, multilingual embeddings have also been shown to improve monolingual task performance BIBREF6 , BIBREF7 .
Consequently, multilingual embeddings can be very useful for low-resource languages – they allow us to overcome the scarcity of data in these languages. However, as detailed in Section "Related Work" , most work on learning multilingual word embeddings so far has heavily relied on the availability of expensive resources such as word-aligned / sentence-aligned parallel corpora or bilingual lexicons. Unfortunately, this data can be prohibitively expensive to collect for many languages. Furthermore even for languages with such data available, the coverage of the data is a limiting factor that restricts how much of the semantic space can be aligned across languages. Overcoming this data bottleneck is a key contribution of our work.
We investigate the use of cheaply available, weakly-supervised image-text data for learning multilingual embeddings. Images are a rich, language-agnostic medium that can provide a bridge across languages. For example, the English word “cat” might be found on webpages containing images of cats. Similarly, the German word “katze” (meaning cat) is likely to be found on other webpages containing similar (or perhaps identical) images of cats. Thus, images can be used to learn that these words have similar semantic content. Importantly, image-text data is generally available on the internet even for low-resource languages.
As image data has proliferated on the internet, tools for understanding images have advanced considerably. Convolutional neural networks (CNNs) have achieved roughly human-level or better performance on vision tasks, particularly classification BIBREF8 , BIBREF9 , BIBREF10 . During classification of an image, CNNs compute intermediate outputs that have been used as generic image features that perform well across a variety of vision tasks BIBREF11 . We use these image features to enforce that words associated with similar images have similar embeddings. Since words associated with similar images are likely to have similar semantic content, even across languages, our learned embeddings capture crosslingual similarity.
There has been other recent work on reducing the amount of supervision required to learn multilingual embeddings (cf. Section "Related Work" ). These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space. A limitation with post-hoc alignment of monolingual embeddings, first noticed by BIBREF12 , is that doing training of monolingual embeddings and alignment separately may lead to worse results than joint training of embeddings in one step. Since the monolingual embedding objective is distinct from the multilingual embedding objective, monolingual embeddings are not required to capture all information helpful for post-hoc multilingual alignment. Post-hoc alignment loses out on some information, whereas joint training does not. BIBREF12 observe improved results using a joint training method compared to a similar post-hoc method. Thus, a joint training approach is desirable. To our knowledge, no previous method jointly learns multilingual word embeddings using weakly-supervised data available for low-resource languages.
To summarize: In this paper we propose an approach for learning multilingual word embeddings using image-text data jointly across all languages. We demonstrate that even a bag-of-words based embedding approach achieves performance competitive with the state-of-the-art on crosslingual semantic similarity tasks. We present experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We also provide a method for training and making predictions on multilingual word embeddings even when the language of the text is unknown.
Related Work
Most work on producing multilingual embeddings has relied on crosslingual human-labeled data, such as bilingual lexicons BIBREF13 , BIBREF4 , BIBREF6 , BIBREF14 or parallel/aligned corpora BIBREF15 , BIBREF4 , BIBREF16 , BIBREF17 . These works are also largely bilingual due to either limitations of methods or the requirement for data that exists only for a few language pairs. Bilingual embeddings are less desirable because they do not leverage the relevant resources of other languages. For example, in learning bilingual embeddings for English and French, it may be useful to leverage resources in Spanish, since French and Spanish are closely related. Bilingual embeddings are also limited in their applications to just one language pair.
For instance, BIBREF16 propose BiSkip, a model that extends the skip-gram approach of BIBREF18 to a bilingual parallel corpus. The embedding for a word is trained to predict not only its own context, but also the contexts for corresponding words in a second corpus in a different language. BIBREF4 extend this approach further to multiple languages. This method, called MultiSkip, is compared to our methods in Section "Results and Conclusions" .
There has been some recent work on reducing the amount of human-labeled data required to learn multilingual embeddings, enabling work on low-resource languages BIBREF19 , BIBREF20 , BIBREF21 . These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space, exploiting the structural similarity of monolingual embedding spaces first noticed by BIBREF13 . As discussed in Section "Introduction" , post-hoc alignment of monolingual embeddings is inherently suboptimal. For example, BIBREF19 and BIBREF20 use human-labeled data, along with shared surface forms across languages, to learn an alignment in the bilingual setting. BIBREF21 build on this for the multilingual setting, using no human-labeled data and instead using an adversarial approach to maximize alignment between monolingual embedding spaces given their structural similarities. This method (MUSE) outperforms previous approaches and represents the state-of-the-art. We compare it to our methods in Section "Results and Conclusions" .
There has been other work using image-text data to improve image and caption representations for image tasks and to learn word translations BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , but no work using images to learn competitive multilingual word-level embeddings.
Data
We experiment using a dataset derived from Google Images search results. The dataset consists of queries and the corresponding image search results. For example, one (query, image) pair might be “cat with big ears” and an image of a cat. Each (query, image) pair also has a weight corresponding to a relevance score of the image for the query. The dataset includes 3 billion (query, image, weight) triples, with 900 million unique images and 220 million unique queries. The data was prepared by first taking the query-image set, filtering to remove any personally identifiable information and adult content, and tokenizing the remaining queries by replacing special characters with spaces and trimming extraneous whitespace. Rare tokens (those that do not appear in queries at least six times) are filtered out. Each token in each query is given a language tag based on the user-set home language of the user making the search on Google Images. For example, if the query “back pain” is made by a user with English as her home language, then the query is stored as “en:back en:pain”. The dataset includes queries in about 130 languages.
Though the specific dataset we use is proprietary, BIBREF26 have obtained a similar dataset, using the Google Images search interface, that comprises queries in 100 languages.
Methods
We present a series of experiments to investigate the usefulness of multimodal image-text data in learning multilingual embeddings. The crux of our method involves enforcing that for each query-image pair, the query representation ( $Q$ ) is similar to the image representation ( $I$ ). The query representation is a function of the word embeddings for each word in a (language-tagged) query, so enforcing this constraint on the query representation also has the effect of constraining the corresponding multilingual word embeddings.
Given some $Q$ and some $I$ , we enforce that the representations are similar by maximizing their cosine similarity. We use a combination of cosine similarity and softmax objective to produce our loss. This high-level approach is illustrated in Figure 1 . In particular, we calculate unweighted loss as follows for a query $q$ and a corresponding image $i$ : $\textrm {loss}(\textrm {Query} \: q, \textrm {Image} \: i) = -\log {\frac{e^{\frac{Q_q^T I_i}{|Q_q| |I_i|}}}{\sum _{j} e^{\frac{Q_q^T I_j}{{|Q_q| |I_j|}}}}}$
where $Q_q$ is the query representation for query $q$; $I_i$ is the image representation corresponding to image $i$; $j$ ranges over all images in the corpus; and $Q_q^T I_i$ is the dot product of the vectors $Q_q$ and $I_i$. Note that this requires that $Q_q$ and $I_j$ have identical dimensionality. If a weight is provided for the (query, image) pair, the loss is multiplied by that weight. Observe that the representations $Q$ and $I$ remain unspecified for now: we detail different experiments involving different representations below.
In practice, given the size of our dataset, calculating the full denominator of the loss for a query, image pair would involve iterating through each image for each query, which is $O(n^2)$ in the number of training examples. To remedy this, we calculated the loss within each batch separately. That is, the denominator of the loss only involved summing over images in the same batch as the query. We used a batch size of 1000 for all experiments. In principle, the negative sampling approach used by BIBREF0 could be used instead to prevent quadratic time complexity.
We can interpret this loss function as producing a softmax classification task for queries and images: given a query, the model needs to predict the image relevant to that query. The cosine similarity between the image representation $I_i$ and the query representation $Q_q$ is normalized under softmax to produce a “belief” that the image $i$ is the image relevant to the query $q$ . This is analogous to the skip-gram model proposed by BIBREF18 , although we use cosine similarity instead of dot product. Just as the skip-gram model ensures the embeddings of words are predictive of their contexts, our model ensures the embeddings of queries (and their constituent words) are predictive of images relevant to them.
Leveraging Image Understanding
Given the natural co-occurrence of images and text on the internet and the availability of powerful generic features, a first approach is to use generic image features as the foundation for the image representation $I$ . We apply two fully-connected layers to learn a transformation from image features to the final representation. We can compute the image representation $I_i$ for image $i$ as: $I_i = ReLU(U * ReLU(Vf_i + b_1) + b_2)$
where $f_i$ is a $d$-dimensional column vector representing generic image features for image $i$, $V$ is an $m \times d$ matrix, $b_1$ is an $m$-dimensional column vector, $U$ is an $n \times m$ matrix, and $b_2$ is an $n$-dimensional column vector. We use a rectified linear unit activation function after each fully-connected layer.
We use 64-dimensional image features derived from image-text data using an approach similar to that used by BIBREF27 , who train image features to discriminate between fine-grained semantic image labels. We run two experiments with $m$ and $n$ : in the first, $m = 200$ and $n = 100$ (producing 100-dimensional embeddings), and in the second, $m = 300$ and $n = 300$ (producing 300-dimensional embeddings).
For the query representation, we use a simple approach. The query representation is just the average of its constituent multilingual embeddings. Then, as the query representation is constrained to be similar to corresponding image representations, the multilingual embeddings (randomly initialized) are also constrained.
Note that each word in each query is prefixed with the language of the query. For example, the English query “back pain” is treated as “en:back en:pain”, and the multilingual embeddings that are averaged are those for “en:back” and “en:pain”. This means that words in different languages with shared surface forms are given separate embeddings. We experiment with shared embeddings for words with shared surface forms in Section "Discussion" .
In practice, we use a fixed multilingual vocabulary for the word embeddings, given the size of the dataset. Out-of-vocabulary words are handled by hashing them to a fixed number of embedding buckets (we use 1,000,000). That is, there are 1,000,000 embeddings for all out-of-vocabulary words, and the assignment of embedding for each word is determined by a hash function.
Our approach for leveraging image understanding is shown in Figure 2 .
Co-Occurrence Only
Another approach for generating query and image representations is treating images as a black box. Without using pixel data, how well can we do? Given the statistics of our dataset (3B query, image pairs with 220M unique queries and 900M unique images), we know that different queries co-occur with the same images. Intuitively, if a query $q_1$ co-occurs with many of the same images as query $q_2$ , then $q_1$ and $q_2$ are likely to be semantically similar, regardless of the visual content of the shared images. Thus, we can use a method that uses only co-occurrence statistics to better understand how well we can capture relationships between queries. This method serves as a baseline to our initial approach leveraging image understanding.
In this setting, we keep query representations the same, and we modify image representations as follows: the image representation for an image is a randomly initialized, trainable vector (of the same dimensionality as the query representation, to ensure the cosine similarity can be calculated). The intuition for this approach is that if two queries are both associated with an image, their query representations will both be constrained to be similar to the same vector, and so the query representations themselves are constrained to be similar. This approach is a simple way to adapt our method to make use of only co-occurrence statistics.
One concern with this approach is that many queries may not have significant image co-occurrences with other queries. In particular, there are likely many images associated with only a single query. These isolated images pull query representations toward their respective random image representations (adding noise), but do not provide any information about the relationships between queries. Additionally, even for images associated with multiple queries, if these queries are all within language, then they may not be very helpful for learning multilingual embeddings. Consequently, we run two experiments: one with the original dataset and one with a subset of the dataset that contains only images associated with queries in at least two different languages. This subset of the dataset has 540 million query, image pairs (down from 3 billion). For both experiments, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings.
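The filtering step for the second experiment can be sketched as a simple pass over the (query, image, weight) triples; the in-memory representation below is an assumption for clarity (the actual corpus would require a distributed pipeline).

```python
from collections import defaultdict

def multilingual_image_subset(triples):
    """Keep triples whose image co-occurs with queries in at least two languages.

    triples: iterable of (query, image_id, weight), where query tokens are
    language-tagged strings such as "en:back en:pain".
    """
    langs_per_image = defaultdict(set)
    for query, image_id, _ in triples:
        langs_per_image[image_id].update(tok.split(":", 1)[0] for tok in query.split())
    return [t for t in triples if len(langs_per_image[t[1]]) >= 2]
```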
Language Unaware Query Representation
In Section "Leveraging Image Understanding" , our method for computing query representations involved prepending language prefixes to each token, ensuring that the multilingual embedding for the English word “pain” is distinct from that for the French word “pain” (meaning bread). These query representations are language aware, meaning that a language tag is required for each query during both training and prediction. In the weakly-supervised setting, we may want to relax this requirement, as language-tagged data is not always readily available.
In our language unaware setting, language tags are not necessary. Each surface form in each query has a distinct embedding, and words with shared surface forms across languages (e.g., English “pain” and French “pain”) have a shared embedding. In this sense, shared surface forms are used as a bridge between languages. This is illustrated in Figure 3 . This may be helpful in certain cases, as for English “actor” and Spanish “actor”. The image representations leverage generic image features, exactly as in Section "Leveraging Image Understanding" . In our language-unaware experiment, we use $m = 200$ and $n = 100$ and produce 100-dimensional embeddings.
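The only difference from the language-aware setting is the token key used for the embedding lookup, as the toy comparison below illustrates (the function names are ours).

```python
def tokens_language_aware(query, lang):
    # "pain" in English and French get different embeddings: "en:pain" vs "fr:pain".
    return [f"{lang}:{tok}" for tok in query.split()]

def tokens_language_unaware(query):
    # Shared surface forms share an embedding, acting as a bridge between languages.
    return query.split()
```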
Evaluation
We evaluate our learned multilingual embeddings using six crosslingual semantic similarity tasks, two multilingual document classification tasks, and 13 monolingual semantic similarity tasks. We adapt code from BIBREF4 and BIBREF28 for evaluation.
This task measures how well multilingual embeddings capture semantic similarity of words, as judged by human raters. The task consists of a series of crosslingual word pairs. For each word pair in the task, human raters judge how semantically similar the words are. The model also predicts how similar the words are, using the cosine similarity between the embeddings. The score on the task is the Spearman correlation between the human ratings and the model predictions.
The specific six subtasks we use are part of the Rubenstein-Goodenough dataset BIBREF29 and detailed by BIBREF4 . We also include an additional task aggregating the six subtasks.
In this task, a classifier built on top of learned multilingual embeddings is trained on the RCV corpus of newswire text as in BIBREF15 and BIBREF4 . The corpus consists of documents in seven languages on four topics, and the classifier predicts the topic. The score on the task is test accuracy. Note that each document is monolingual, so this task measures performance within languages for multiple languages (as opposed to crosslingual performance).
This task is the same as the crosslingual semantic similarity task described above, but all word pairs are in English. We use this to understand how monolingual performance differs across methods. We present an average score across the 13 subtasks provided by BIBREF28 .
Evaluation tasks also report a coverage, which is the fraction of the test data that a set of multilingual embeddings is able to make predictions on. This is needed because not every word in the evaluation task has a corresponding learned multilingual embedding. Thus, if coverage is low, scores are less likely to be reliable.
Results and Conclusions
We first present results on the crosslingual semantic similarity and multilingual document classification for our previously described experiments. We compare against the multiSkip approach by BIBREF4 and the state-of-the-art MUSE approach by BIBREF21 . Results for crosslingual semantic similarity are presented in Table 1 , and results for multilingual document classification are presented in Table 2 .
Our experiments corresponding to Section "Leveraging Image Understanding" are titled ImageVec 100-Dim and ImageVec 300-Dim in Tables 1 and 2 . Both experiments significantly outperform the multiSkip experiments in all crosslingual semantic similarity subtasks, and the 300-dimensional experiment slightly outperforms MUSE as well. Note that coverage scores are generally around 0.8 for these experiments. In multilingual document classification, MUSE achieves the best scores, and while our 300-dimensional experiment outperforms the multiSkip 40-dimensional experiment, it does not perform as well as the 512-dimensional experiment. Note that coverage scores are lower on these tasks.
One possible explanation for the difference in performance across the crosslingual semantic similarity task and the multilingual document classification task is that the former measures crosslingual performance, whereas the latter measures monolingual performance in multiple languages, as described in the Evaluation section. We briefly discuss further evidence that our models perform less well in the monolingual context below.
Discussion
We demonstrated how to learn competitive multilingual word embeddings using image-text data – which is available for low-resource languages. We have presented experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We have also proposed a method for training and making predictions on multilingual word embeddings even when language tags for words are unavailable. Using a simple bag-of-words approach, we achieve performance competitive with the state-of-the-art on crosslingual semantic similarity tasks.
We have also identified a direction for future work: within language performance is weaker than the state-of-the-art, likely because our work leveraged only image-text data rather than a large monolingual corpus. Fortunately, our joint training approach provides a simple extension of our method for future work: multi-task joint training. For example, in a triple-task setting, we can simultaneously (1) constrain query and relevant image representations to be similar and (2) constrain word embeddings to be predictive of context in large monolingual corpora and (3) constrain representations for parallel text across languages to be similar. For the second task, implementing recent advances in producing monolingual embeddings, such as using subword information, is likely to improve results. Multilingual embeddings learned in a multi-task setting would reap both the benefits of our methods and existing methods for producing word embeddings. For example, while our method is likely to perform worse for more abstract words, when combined with existing approaches it is likely to achieve more consistent performance.
An interesting effect of our approach is that queries and images are embedded into a shared space through the query and image representations. This setup enables a range of future research directions and applications, including better image features, better monolingual text representations (especially for visual tasks), nearest-neighbor search for text or images given one modality (or both), and joint prediction using text and images. | performance is significantly degraded without pixel data |
f4e17b14318b9f67d60a8a2dad1f6b506a10ab36 | f4e17b14318b9f67d60a8a2dad1f6b506a10ab36_0 | Q: How is the generative model evaluated?
Text: Introduction
The ability to determine entailment or contradiction between natural language text is essential for improving the performance in a wide range of natural language processing tasks. Recognizing Textual Entailment (RTE) is a task primarily designed to determine whether two natural language sentences are independent, contradictory or in an entailment relationship where the second sentence (the hypothesis) can be inferred from the first (the premise). Although systems that perform well in RTE could potentially be used to improve question answering, information extraction, text summarization and machine translation BIBREF0 , only in few of such downstream NLP tasks sentence-pairs are actually available. Usually, only a single source sentence (e.g. a question that needs to be answered or a source sentence that we want to translate) is present and models need to come up with their own hypotheses and commonsense knowledge inferences.
The release of the large Stanford Natural Language Inference (SNLI) corpus BIBREF1 allowed end-to-end differentiable neural networks to outperform feature-based classifiers on the RTE task BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .
In this work, we go a step further and investigate how well recurrent neural networks can produce true hypotheses given a source sentence. Furthermore, we qualitatively demonstrate that by only training on input-output pairs and recursively generating entailed sentences we can generate natural language inference chains (see Figure 1 for an example). Note that every inference step is interpretable as it maps one natural language sentence to another.
Our contributions are fourfold: (i) we propose an entailment generation task based on the SNLI corpus (§ "Entailment Generation" ), (ii) we investigate a sequence-to-sequence model and find that $82\%$ of generated sentences are correct (§ "Example Generations" ), (iii) we demonstrate the ability to generate natural language inference chains trained solely on entailment pairs (§ "Inference Chain Generation" ), and finally (iv) we can also generate sentences with more specific information by swapping source and target sentences during training (§ "Inverse Inference" ).
Method
In the section, we briefly introduce the entailment generation task and our sequence-to-sequence model.
Entailment Generation
To create the entailment generation dataset, we simply filter the Stanford Natural Language Inference corpus for sentence-pairs of the entailment class. This results in a training set of $183,416$ sentence pairs, a development set of $3,329$ pairs and a test set of $3,368$ pairs. Instead of a classification task, we can now use this dataset for a sequence transduction task.
Sequence-to-Sequence
Sequence-to-sequence recurrent neural networks BIBREF7 have been successfully employed for many sequence transduction tasks in NLP such as machine translation BIBREF8 , BIBREF9 , constituency parsing BIBREF10 , sentence summarization BIBREF11 and question answering BIBREF12 . They consist of two recurrent neural networks (RNNs): an encoder that maps an input sequence of words into a dense vector representation, and a decoder that conditioned on that vector representation generates an output sequence. Specifically, we use long short-term memory (LSTM) RNNs BIBREF13 for encoding and decoding. Furthermore, we experiment with word-by-word attention BIBREF8 , which allows the decoder to search in the encoder outputs to circumvent the LSTM's memory bottleneck. We use greedy decoding at test time. The success of LSTMs with attention in sequence transduction tasks makes them a natural choice as a baseline for entailment generation, and we leave the investigation of more advanced models to future work.
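For readers unfamiliar with the setup, a compact PyTorch sketch of a plain (non-attentive) encoder-decoder with greedy decoding is given below; the hidden sizes, special token ids and maximum length are placeholder assumptions, and the word-by-word attention variant is not shown.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder LSTM baseline (no attention); a sketch, not the authors' code."""
    def __init__(self, vocab_size, emb_dim=300, hidden=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, premise_ids, hypothesis_ids):
        _, state = self.encoder(self.emb(premise_ids))           # encode the premise
        dec_out, _ = self.decoder(self.emb(hypothesis_ids), state)
        return self.out(dec_out)                                  # logits per target position

    @torch.no_grad()
    def greedy_decode(self, premise_ids, bos_id, eos_id, max_len=30):
        _, state = self.encoder(self.emb(premise_ids))            # premise_ids: shape (1, T)
        tok, generated = torch.tensor([[bos_id]]), []
        for _ in range(max_len):
            dec_out, state = self.decoder(self.emb(tok), state)
            tok = self.out(dec_out[:, -1]).argmax(dim=-1, keepdim=True)
            if tok.item() == eos_id:
                break
            generated.append(tok.item())
        return generated
```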
Optimization and Hyperparameters
We use stochastic gradient descent with a mini-batch size of 64 and the ADAM optimizer BIBREF14 with a first momentum coefficient of $0.9$ and a second momentum coefficient of $0.999$ . Word embeddings are initialized with pre-trained word2vec vectors BIBREF15 . Out-of-vocabulary words ( $10.5\%$ ) are randomly initialized by sampling values uniformly from $[-\sqrt{3}, \sqrt{3}]$ and optimized during training. Furthermore, we clip gradients using a norm of $5.0$ . We stop training after 25 epochs.
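A training-loop sketch matching the stated configuration (ADAM with momentum coefficients 0.9 and 0.999, gradient-norm clipping at 5.0, minibatches of 64, 25 epochs) is shown below; the learning rate, the data loader and the padding handling are not specified in the text and are assumptions, and `model` refers to the sketch above.

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

for epoch in range(25):
    for premise_ids, hypothesis_in, hypothesis_target in train_loader:  # batches of 64
        logits = model(premise_ids, hypothesis_in)
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               hypothesis_target.reshape(-1))           # padding ignored for brevity
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
        optimizer.step()
```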
Experiments and Results
We present results for various tasks: (i) given a premise, generate a sentence that can be inferred from the premise, (ii) construct inference chains by recursively generating sentences, and (iii) given a sentence, create a premise that would entail this sentence, i.e., make a more descriptive sentence by adding specific information.
Quantitative Evaluation
We train an LSTM with and without attention on the training set. After training, we take the best model in terms of BLEU score BIBREF16 on the development set and calculate the BLEU score on the test set. To our surprise, we found that using attention yields only a marginally higher BLEU score (43.1 vs. 42.8). We suspect that this is due to the fact that generating entailed sentences has a larger space of valid target sequences, which makes the use of BLEU problematic and penalizes correct solutions. Hence, we manually annotated 100 random test sentences and decided whether the generated sentence can indeed be inferred from the source sentence. We found that sentences generated by an LSTM with attention are substantially more accurate ( $82\%$ accuracy) than those generated from an LSTM baseline ( $71.7\%$ ). To gain more insights into the model's capabilities, we turn to a thorough qualitative analysis of the attention LSTM model in the remainder of this paper.
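The automatic part of this evaluation boils down to corpus-level BLEU between generated hypotheses and the gold entailed sentences; a sketch using NLTK is given below (the sentences are invented, and having only a single reference per premise is precisely why BLEU penalizes other valid entailments).

```python
from nltk.translate.bleu_score import corpus_bleu

# Each hypothesis is scored against a list of reference token lists; here there is
# a single gold entailed sentence per premise. The example sentences are invented.
references = [[["a", "man", "is", "outside"]],
              [["a", "woman", "plays", "an", "instrument"]]]
hypotheses = [["a", "man", "is", "outdoors"],
              ["a", "woman", "plays", "music"]]

# Bigram BLEU for these short toy sentences; the default is 4-gram BLEU.
print(corpus_bleu(references, hypotheses, weights=(0.5, 0.5)))
```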
Example Generations
Figure 2 shows examples of generated sentences from the development set. Syntactic simplification of the input sentence seems to be the most common approach. The model removes certain parts of the premise such as adjectives, resulting in a more abstract sentence (see Figure 2 (UID8)).
Figure 2 (UID9) demonstrates that the system can recognize the number of subjects in the sentence and includes this information in the generated sentence. However, we did not observe such 'counting' behavior for more than four subjects, indicating that the system memorized frequency patterns from the training set.
Furthermore, we found predictions that hint at common-sense assumptions: if a sentence talks about a father holding a newborn baby, it is most likely that the newborn baby is his own child (Example 2 (UID10)).
Two recurring limitations of the proposed model are related to dealing with words that have very different meanings but similar word2vec embeddings (e.g. colors), as well as ambiguous words. For instance, 'bar' in Figure 3 (UID8) refers to pole vault and not a place in which you can have a drink. Substituting one color for another (Figure 3 (UID14)) is a common mistake.
The SNLI corpus might not reflect the variety of sentences that can be encountered in downstream NLP tasks. In Figure 4 we present generated sentences for randomly selected examples of out-of-domain textual resources. They demonstrate that the model generalizes well to out-of-domain sentences, making it a potentially very useful component for improving systems for question answering, information extraction, sentence summarization etc.
Inference Chain Generation
Next, we test how well the model can generate inference chains by repeatedly passing generated output sentences as inputs to the model. We stop once a sentence has already been generated in the chain. Figure 5 shows that this works well despite that the model was only trained on sentence-pairs.
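The chain construction itself is a small loop around the decoder; a sketch with the stopping rule described above (stop once a sentence repeats) follows, where `generate` stands for any premise-to-hypothesis function such as greedy decoding with the trained model.

```python
def inference_chain(premise, generate, max_steps=10):
    """Recursively feed each generated sentence back in as the next premise."""
    chain, seen = [premise], {premise}
    for _ in range(max_steps):
        nxt = generate(chain[-1])
        if nxt in seen:          # stop once a sentence has already been generated
            break
        chain.append(nxt)
        seen.add(nxt)
    return chain
```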
Furthermore, by generating inference chains for all sentences in the development set we construct an entailment graph. In that graph we found that sentences with shared semantics are eventually mapped to the same sentence that captures the shared meaning.
A visualization of the topology of the entailment graph is shown in Figure 6 . Note that there are several long inference chains, as well as large clusters of sentences (nodes) that are mapped (links) to the same shared meaning.
Inverse Inference
By swapping the source and target sequences for training, we can train a model that given a sentence invents additional information to generate a new sentence (Figure 7 ). We believe this might prove useful to increase the language variety and complexity of AI unit tests such as the Facebook bAbI task BIBREF17 , but we leave this for future work.
Conclusion and Future Work
We investigated the ability of sequence-to-sequence models to generate entailed sentences from a source sentence. To this end, we trained an attentive LSTM on entailment-pairs of the SNLI corpus. We found that this works well and generalizes beyond in-domain sentences. Hence, it could become a useful component for improving the performance of other NLP systems.
We were able to generate natural language inference chains by recursively generating sentences from previously inferred ones. This allowed us to construct an entailment graph for sentences of the SNLI development corpus. In this graph, the shared meaning of two related sentences is represented by the first natural language sentence that connects both sentences. Every inference step is interpretable as it maps a natural language sentence to another one.
Towards high-quality data augmentation, we experimented with reversing the generation task. We found that this enabled the model to learn to invent specific information.
For future work, we want to integrate the presented model into larger architectures to improve the performance of downstream NLP tasks such as information extraction and question answering. Furthermore, we plan to use the model for data augmentation to train expressive neural networks on tasks where only little annotated data is available. Another interesting research direction is to investigate methods for increasing the diversity of the generated sentences.
Acknowledgments
We thank Guillaume Bouchard for suggesting the reversed generation task, and Dirk Weissenborn, Isabelle Augenstein and Matko Bosnjak for comments on drafts of this paper. This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award. | Comparing BLEU score of model with and without attention |
fac052c4ad6b19a64d7db32fd08df38ad2e22118 | fac052c4ad6b19a64d7db32fd08df38ad2e22118_0 | Q: How do they evaluate their method?
Text: Introduction
Social media plays an important role in health informatics and Twitter has been one of the most influential social media channels for mining population-level health insights BIBREF0 , BIBREF1 , BIBREF2 . These insights range from forecasting influenza epidemics BIBREF3 to predicting adverse drug reactions BIBREF4 . A notable challenge due to the short length of Twitter messages is the categorization of tweets into topics in a supervised manner, i.e., topic classification, as well as in an unsupervised manner, i.e., clustering.
Classification of tweets into topics has been studied extensively BIBREF5 , BIBREF6 , BIBREF7 . Even though text classification algorithms can reach significant accuracy levels, supervised machine learning approaches require annotated data, i.e., topic categories to learn from for classification. On the other hand, annotated data is not always available, as the annotation process is burdensome and time-consuming. In addition, discussions in social media evolve rapidly with recent trends, rendering Twitter a dynamic environment with ever-changing topics. Therefore, unsupervised approaches are essential for mining health-related information from Twitter.
Proposed methods for clustering tweets employ conventional text clustering pipelines involving preprocessing applied to raw text strings, followed by feature extraction, which is then followed by a clustering algorithm BIBREF8 , BIBREF9 , BIBREF10 . The performance of such approaches depends heavily on feature extraction, which requires careful engineering and domain knowledge BIBREF11 . Recent advancements in machine learning research, i.e., deep neural networks, enable efficient representation learning from raw data in a hierarchical manner BIBREF12 , BIBREF13 . Several natural language processing (NLP) tasks involving Twitter data have benefited from deep neural network-based approaches, including sentiment classification of tweets BIBREF14 , predicting potential suicide attempts from Twitter BIBREF15 and simulating epidemics from Twitter BIBREF16 .
In this work, we propose deep convolutional autoencoders (CAEs) for obtaining efficient representations of health-related tweets in an unsupervised manner. We validate our approach on a publicly available Twitter dataset by comparing it against conventional feature extraction methods on 3 different clustering algorithms. Furthermore, we propose a constraint on the learned representations during neural network training in order to further improve the clustering performance. We show that the proposed deep neural network-based representation learning method outperforms conventional methods in terms of clustering performance in experiments with varying numbers of clusters.
Related Work
Devising efficient representations of tweets, i.e., features, for performing clustering has been studied extensively. Most frequently used features for representing the text in tweets as numerical vectors are bag-of-words (BoWs) and term frequency-inverse document frequency (tf-idf) features BIBREF17 , BIBREF9 , BIBREF10 , BIBREF18 , BIBREF19 . Both of these feature extraction methods are based on word occurrence counts and eventually, result in a sparse (most elements being zero) document-term matrix. Proposed algorithms for clustering tweets into topics include variants of hierarchical, density-based and centroid-based clustering methods; k-means algorithm being the most frequently used one BIBREF9 , BIBREF19 , BIBREF20 .
Numerous works on topic modeling of tweets are available as well. Topic models are generative models, relying on the idea that a given tweet is a mixture of topics, where a topic is a probability distribution over words BIBREF21 . Even though the objective in topic modeling is slightly different than that of pure clustering, representing each tweet as a topic vector is essentially a way of dimensionality reduction or feature extraction and can further be followed by a clustering algorithm. Proposed topic modeling methods include conventional approaches or variants of them such as Latent Dirichlet Allocation (LDA) BIBREF22 , BIBREF17 , BIBREF9 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF19 , BIBREF28 , BIBREF29 and Non-negative Matrix Factorization (NMF) BIBREF30 , BIBREF18 . Note that topic models such as LDA are based on the notion that words belonging to a topic are more likely to appear in the same document and do not assume a distance metric between discovered topics.
In contrast to the abovementioned feature extraction methods, which are not specific to the representation of tweets but generic to natural language processing, various works propose custom feature extraction methods for certain health-related information retrieval tasks from Twitter. For instance, Lim et al. engineered sentiment analysis features to discover latent infectious diseases from Twitter BIBREF31 . In order to track public health condition trends from Twitter, specific features are proposed by Parker et al. employing a Wikipedia article index, i.e., treating the retrieval of medically-related Wikipedia articles as an indicator of a health-related condition BIBREF32 . Custom user similarity features calculated from tweets were also proposed for building a framework for recommending health-related topics BIBREF27 .
The idea of learning effective representations from raw data using neural networks has been employed in numerous machine learning domains such as computer vision and natural language processing BIBREF12 , BIBREF13 . The concept relies on the hierarchical, layer-wise architecture of neural networks in which the raw input data is encoded into informative representations of lower dimensions (representations of higher dimensions are possible as well) in a highly non-linear fashion. Autoencoders, Denoising Autoencoders, Convolutional Autoencoders, Sparse Autoencoders, Stacked Autoencoders and combinations of these, e.g., Denoising Convolutional Autoencoders, are the most common deep neural network architectures specifically used for representation learning. In an autoencoder training, the network tries to reconstruct the input data at its output, which forces the model to capture the most salient features of the data at its intermediate layers. If the intermediate layers correspond to a lower dimensional latent space than the original input, such autoencoders are also known as undercomplete. Activations extracted from these layers can be considered as compact, non-linear representations of the input. Another significant advancement in neural network-based representation learning in NLP tasks is word embeddings (also called distributed representation of words). By representing each word in a given vocabulary with a real-valued vector of a fixed dimension, word embeddings enable capturing of lexical, semantic or even syntactic similarities between words. Typically, these vector representations are learned from large corpora and can be used to enhance the performance of numerous NLP tasks such as document classification, question answering and machine translation. Most frequently used word embeddings are word2vec BIBREF33 and GloVe (Global Vectors for Word Representation) BIBREF34 . Both of these are extracted in an unsupervised manner and are based on the distributional hypothesis BIBREF35 , i.e., the assumption that words that occur in the same contexts tend to have similar meanings. Both word2vec and GloVe treat a word as a smallest entity to train on. A shift in this paradigm was introduced by fastText BIBREF36 , which treats each word as a bag of character n-grams. Consequently, fastText embeddings are shown to have better representations for rare words BIBREF36 . In addition, one can still construct a vector representation for an out-of-vocabulary word which is not possible with word2vec or GloVe embeddings BIBREF36 . Enhanced methods for deducting better word and/or sentence representations were recently introduced as well by Peters et al. with the name ELMo (Embeddings from Language Models) BIBREF37 and by Devlin et al. with the name BERT (Bidirectional Encoder Representations from Transformers) BIBREF38 . All of these word embedding models are trained on large corpora such as Wikipedia, in an unsupervised manner. For analyzing tweets, word2vec and GloVe word embeddings have been employed for topical clustering of tweets BIBREF39 , topic modeling BIBREF40 , BIBREF41 and extracting depression symptoms from tweets BIBREF20 .
Metrics for evaluating the performance of clustering algorithms vary depending on whether the ground truth topic categories are available or not. If so, frequently used metrics are accuracy and normalized mutual information. In the absence of ground truth labels, one has to use internal clustering criteria such as the Calinski-Harabasz (CH) score BIBREF42 and the Davies-Bouldin index BIBREF43 . Arbelaitz et al. provide an extensive comparative study of cluster validity indices BIBREF44 .
Dataset
For this study, a publicly available dataset is used BIBREF45 . The dataset, consisting of tweets, has been collected using the Twitter API and was initially introduced by Karami et al. BIBREF46 . The earliest tweet dates back to 13 June 2011, while the latest one has a timestamp of 9 April 2015. The dataset consists of 63,326 tweets in the English language, collected from the Twitter channels of 16 major health news agencies. The list of health news channels and the number of tweets in the dataset from each channel can be examined in Table 1 .
A typical tweet from the dataset can be examined in Figure 1 . For every tweet, the raw data consists of the tweet text, in most cases followed by a URL to the original news article of the particular news source. This URL string, if available, is removed from each tweet as it does not carry any natural language information. As Twitter allows several ways for users to interact, such as retweeting or mentioning, these actions appear in the raw text as well. For retweets, an indicator string "RT" appears as a prefix in the raw data, and for user mentions, a string of the form "@username" appears in the raw data. These two tokens are removed as well. In addition, hashtags are converted to plain tokens by removing the "#" sign appearing before them (e.g. <#pregnancy> becomes <pregnancy>). The number of words, number of unique words and mean word counts for each Twitter channel can also be examined in Table 1 . The longest tweet consists of 27 words.
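A regex-based sketch of this cleanup is given below; it illustrates the described steps rather than the authors' exact preprocessing code.

```python
import re

def clean_tweet(text: str) -> str:
    """Remove URLs, retweet markers, user mentions and '#' signs from a raw tweet,
    following the preprocessing described above (illustrative sketch)."""
    text = re.sub(r"http\S+|www\.\S+", "", text)   # drop URLs
    text = re.sub(r"^RT\s+", "", text)             # drop retweet prefix
    text = re.sub(r"@\w+", "", text)               # drop user mentions
    text = text.replace("#", "")                   # keep the hashtag word, drop the sign
    return " ".join(text.split())                  # normalize whitespace

print(clean_tweet("RT @nprhealth: Flu season peaks early #influenza http://n.pr/xyz"))
# -> 'Flu season peaks early influenza'
```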
Conventional Representations
For representing tweets, 5 conventional representation methods are proposed as baselines.
Word frequency features: For word occurrence-based representations of tweets, conventional tf-idf and BoWs are used to obtain the document-term matrix of size $N \times P$ in which each row corresponds to a tweet and each column corresponds to a unique word/token, i.e., $N$ data points and $P$ features. As the document-term matrix obtained from tf-idf or BoWs features is extremely sparse and consequently redundant across many dimensions, dimensionality reduction and topic modeling to a lower dimensional latent space are performed by the methods below.
Principal Component Analysis (PCA): PCA is used to map the word frequency representations from the original feature space to a lower dimensional feature space by an orthogonal linear transformation in such a way that the first principal component has the highest possible variance and similarly, each succeeding component has the highest variance possible while being orthogonal to the preceding components. Our PCA implementation has a time complexity of $\mathcal {O}(NP^2 + P^3)$ .
Truncated Singular Value Decomposition (t-SVD): Standard SVD and t-SVD are commonly employed dimensionality reduction techniques in which a matrix is reduced or approximated into a low-rank decomposition. Time complexity of SVD and t-SVD for $S$ components are $\mathcal {O}(min(NP^2, N^2P))$ and $\mathcal {O}(N^2S)$ , respectively (depending on the implementation). Contrary to PCA, t-SVD can be applied to sparse matrices efficiently as it does not require data normalization. When the data matrix is obtained by BoWs or tf-idf representations as in our case, the technique is also known as Latent Semantic Analysis.
LDA: Our LDA implementation employs online variational Bayes algorithm introduced by Hoffman et al. which uses stochastic optimization to maximize the objective function for the topic model BIBREF47 .
NMF: As NMF finds two non-negative matrices whose product approximates the non-negative document-term matrix, it allows regularization. Our implementation did not employ any regularization and the divergence function is set to be squared error, i.e., Frobenius norm.
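A scikit-learn sketch of these baselines is shown below, reducing the sparse word-frequency features to the 24 latent features used later in the experiments. The synthetic corpus is only a placeholder for the cleaned tweets, and PCA is omitted since it requires a dense matrix.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation, NMF

# Placeholder corpus; in the paper this would be the 63,326 cleaned tweets.
tweets = [f"sample health tweet number {i} about flu diet study vaccine" for i in range(500)]

bow = CountVectorizer().fit_transform(tweets)      # sparse N x P BoWs matrix
tfidf = TfidfVectorizer().fit_transform(tweets)    # sparse N x P tf-idf matrix

# Reduce each representation to 24 latent features, matching the feature size used later.
lsa_feats = TruncatedSVD(n_components=24).fit_transform(tfidf)                      # t-SVD / LSA
lda_feats = LatentDirichletAllocation(n_components=24,
                                      learning_method="online").fit_transform(bow)  # online LDA
nmf_feats = NMF(n_components=24).fit_transform(tfidf)  # default Frobenius (squared error) objective
```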
Representation Learning
We propose 2D convolutional autoencoders for extracting compact representations of tweets from their raw form in a highly non-linear fashion. In order to turn a given tweet into a 2D structure to be fed into the CAE, we extract the word vectors of each word using word embedding models, i.e., for a given tweet, $t$ , consisting of $W$ words, the 2D input is $I_{t} \in \mathbb {R}^{W \times D}$ where $D$ is the embedding vector dimension. We compare 4 different word embeddings, namely word2vec, GloVe, fastText and BERT, with embedding vector dimensions of 300, 300, 300 and 768, respectively. We set the maximum sequence length to 32, i.e., for tweets with fewer words, the input matrix is padded with zeros. As word2vec and GloVe embeddings cannot handle out-of-vocabulary words, such cases are represented as a vector of zeros. The process of extracting word vector representations of a tweet to form the 2D input matrix can be examined in Figure 1 .
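A sketch of assembling the padded $32 \times 300$ input matrix is given below; the toy embedding table merely stands in for a loaded word2vec/GloVe/fastText model.

```python
import numpy as np

MAX_LEN, EMB_DIM = 32, 300  # maximum sequence length and embedding dimension

def tweet_to_matrix(tokens, embeddings):
    """Stack per-word embedding vectors into a (MAX_LEN x EMB_DIM) input matrix.
    `embeddings` is any mapping word -> vector; unknown words and padding
    positions are rows of zeros, as described in the text."""
    mat = np.zeros((MAX_LEN, EMB_DIM), dtype=np.float32)
    for i, tok in enumerate(tokens[:MAX_LEN]):
        mat[i] = embeddings.get(tok, np.zeros(EMB_DIM))
    return mat

# Toy embedding table standing in for pre-trained vectors (illustration only).
toy_emb = {"flu": np.random.rand(EMB_DIM), "season": np.random.rand(EMB_DIM)}
X = tweet_to_matrix(["flu", "season", "peaks"], toy_emb)
print(X.shape)  # (32, 300)
```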
The CAE architecture can be considered as consisting of 2 parts, i.e., the encoder and the decoder. The encoder, $f_{enc}(\cdot )$ , is the part of the network that compresses the input, $I$ , into a latent space representation, $U$ , and the decoder, $f_{dec}(\cdot )$ , aims to reconstruct the input from the latent space representation (see equation 12 ). In essence,
$$U = f_{enc}(I) = f_{L}(f_{L-1}(...f_{1}(I)))$$ (Eq. 12)
where $L$ is the number of layers in the encoder part of the CAE.
The encoder in the proposed architecture consists of three 2D convolutional layers with 64, 32 and 1 filters, respectively. The decoder follows the same symmetry with three convolutional layers with 1, 32 and 64 filters, respectively, and an output convolutional layer of a single filter (see Figure 1 ). All convolutional layers have a kernel size of (3 $\times $ 3) and an activation function of Rectified Linear Unit (ReLU), except the output layer which employs a linear activation function. Each convolutional layer in the encoder is followed by a 2D MaxPooling layer and similarly each convolutional layer in the decoder is followed by a 2D UpSampling layer, serving as an inverse operation (having the same parameters). The pooling sizes for pooling layers are (2 $\times $ 5), (2 $\times $ 5) and (2 $\times $ 2), respectively, for the architectures in which word2vec, GloVe and fastText embeddings are employed. With this configuration, an input tweet of size $32 \times 300$ (corresponding to maximum sequence length $\times $ embedding dimension, $D$ ) is downsampled to a size of $4 \times 6$ out of the encoder (bottleneck layer). As BERT word embeddings have word vectors of fixed size 768, the pooling layer sizes are chosen to be (2 $\times $ 8), (2 $\times $ 8) and (2 $\times $ 2), respectively, for that case. In summary, a representation of $4 \times 6 = 24$ values is learned for each tweet through the encoder, e.g., for fastText embeddings the flow of dimensions after each encoder block is: $32 \times 300 \rightarrow 16 \times 60 \rightarrow 8 \times 12 \rightarrow 4 \times 6$ .
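A hedged Keras sketch of this encoder-decoder layout for 300-dimensional embeddings follows; the 'same' padding mode and the functional-API structure are assumptions rather than the authors' exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cae(seq_len=32, emb_dim=300):
    """Sketch of the described CAE for word2vec/GloVe/fastText inputs."""
    inp = layers.Input(shape=(seq_len, emb_dim, 1))
    x = inp
    # Encoder: Conv(64) -> pool(2x5) -> Conv(32) -> pool(2x5) -> Conv(1) -> pool(2x2)
    for filters, pool in [(64, (2, 5)), (32, (2, 5)), (1, (2, 2))]:
        x = layers.Conv2D(filters, (3, 3), activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(pool)(x)
    bottleneck = x  # shape (4, 6, 1) -> 24 values per tweet
    # Decoder mirrors the encoder with UpSampling layers, then a single-filter output conv.
    for filters, pool in [(1, (2, 2)), (32, (2, 5)), (64, (2, 5))]:
        x = layers.Conv2D(filters, (3, 3), activation="relu", padding="same")(x)
        x = layers.UpSampling2D(pool)(x)
    out = layers.Conv2D(1, (3, 3), activation="linear", padding="same")(x)
    return Model(inp, out), Model(inp, bottleneck)

cae, encoder = build_cae()
cae.summary()
```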
In numerous NLP tasks, an Embedding Layer is employed as the first layer of the neural network which can be initialized with the word embedding matrix in order to incorporate the embedding process into the architecture itself instead of manual extraction. In our case, this was not possible because of nonexistence of an inversed embedding layer in the decoder (as in the relationship between MaxPooling layers and UpSampling layers) as an embedding layer is not differentiable.
Training of autoencoders aims to minimize the reconstruction error/loss, i.e., the deviation of the reconstructed output from the input. The $L_2$ -loss or mean squared error (MSE) is chosen to be the loss function. In autoencoders, minimizing the $L_2$ -loss is equivalent to maximizing the mutual information between the reconstructed inputs and the original ones BIBREF48 . In addition, from a probabilistic point of view, minimizing the $L_2$ -loss is the same as maximizing the likelihood of the data given the parameters, corresponding to a maximum likelihood estimator. The optimizer for the autoencoder training is chosen to be Adam due to its faster convergence abilities BIBREF49 . The learning rate for the optimizer is set to $10^{-5}$ and the batch size for the training is set to 32. A random split of 80% training-20% validation is performed for monitoring convergence. The maximum number of training epochs is set to 50.
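A minimal training sketch matching these hyperparameters, assuming the `cae` model from the previous sketch; the random tensor below is only a stand-in for the real array of embedded tweets.

```python
import numpy as np
from tensorflow.keras.optimizers import Adam

# Stand-in data; in practice X holds the embedded tweets, shape (N, 32, 300, 1).
X = np.random.rand(256, 32, 300, 1).astype("float32")

cae.compile(optimizer=Adam(learning_rate=1e-5), loss="mse")
cae.fit(X, X,                      # the autoencoder reconstructs its own input
        batch_size=32,
        epochs=50,
        validation_split=0.2)      # 80% / 20% split for monitoring convergence
```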
$L_2$-norm Constrained Representation Learning
Certain constraints on neural network weights are commonly employed during training in order to reduce overfitting, also known as regularization. Such constraints include $L_1$ regularization, $L_2$ regularization, orthogonal regularization etc. Even though regularization is a common practice, standard training of neural networks does not inherently impose any constraints on the learned representations (activations), $U$ , other than the ones compelled by the activation functions (e.g. ReLUs resulting in non-negative outputs). Recent advancements in computer vision research show that constraining the learned representations can enhance the effectiveness of representation learning, consequently increasing the clustering performance BIBREF50 , BIBREF51 .
$$\begin{aligned} & \text{minimize} & & L = \frac{1}{N} \left\Vert I - f_{dec}(f_{enc}(I))\right\Vert ^2_{2} \\ & \text{subject to} & & \left\Vert f_{enc}(I)\right\Vert ^2_{2} = 1 \end{aligned}$$ (Eq. 14)
We propose an $L_2$ norm constraint on the learned representations out of the bottleneck layer, $U$ . Essentially, this is a hard constraint introduced during neural network training that results in learned features with unit $L_2$ norm out of the bottleneck layer (see equation 14 where $N$ is the number of data points). Training a deep convolutional autoencoder with such a constraint is shown to be much more effective for image data than applying $L_2$ normalization on the learned representations after training BIBREF51 . To the best of our knowledge, this is the first study to incorporate $L_2$ norm constraint in a task involving text data.
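One way to realize the constraint of equation 14 is to L2-normalize the flattened bottleneck activations inside the network during training; this is an illustrative reading, not necessarily the authors' exact implementation. The helper below could be placed between the encoder and decoder of the earlier sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def l2_constrained_bottleneck(x):
    """Force the 24 bottleneck activations onto the unit L2 sphere (sketch of Eq. 14)."""
    flat = layers.Flatten()(x)                                        # (batch, 24)
    unit = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(flat)
    return layers.Reshape((4, 6, 1))(unit)                            # back to the 2D map
```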
Evaluation
In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representation of tweets, we fix the number of features to 24 for all methods and feed these representations as an input to 3 different clustering algorithms namely, k-means, Ward and spectral clustering with cluster numbers of 10, 20 and 50. The distance metric for k-means clustering is chosen to be Euclidean and the linkage criterion for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increase the within-cluster variance in a hierarchical manner. For spectral clustering, a Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction as well as concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use the Calinski-Harabasz score BIBREF42 , also known as the variance ratio criterion. The CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. The CH score has a range of $[0, +\infty )$ and a higher CH score corresponds to a better clustering. The computational complexity of calculating the CH score is $\mathcal {O}(N)$ .
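A sketch of this evaluation loop with scikit-learn is shown below, using a random array as a stand-in for the 24-dimensional tweet representations.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.metrics import calinski_harabasz_score

feats = np.random.rand(1000, 24)   # stand-in for the learned 24-d representations

for k in (10, 20, 50):
    for name, algo in {
        "k-means": KMeans(n_clusters=k),
        "Ward": AgglomerativeClustering(n_clusters=k, linkage="ward"),
        "spectral": SpectralClustering(n_clusters=k, affinity="rbf"),
    }.items():
        labels = algo.fit_predict(feats)
        print(name, k, calinski_harabasz_score(feats, labels))
```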
For a given dataset $X$ consisting of $N$ data points, i.e., $X = \big \lbrace x_1, x_2, ... , x_N\big \rbrace $ and a given set of disjoint clusters $C$ with $K$ clusters, i.e., $C = \big \lbrace c_1, c_2, ... , c_K\big \rbrace $ , Calinski-Harabasz score, $S_{CH}$ , is defined as
$$S_{CH} = \frac{N-K}{K-1}\frac{\sum _{c_k \in C}^{}{N_k \left\Vert \overline{c_k}-\overline{X}\right\Vert ^2_{2}}}{\sum _{c_k \in C}^{}{}\sum _{x_i \in c_k}^{}{\left\Vert x_i-\overline{c_k}\right\Vert ^2_{2}}}$$ (Eq. 16)
where $N_k$ is the number of points belonging to the cluster $c_k$ , $\overline{X}$ is the centroid of the entire dataset, $\frac{1}{N}\sum _{x_i \in X}{x_i}$ and $\overline{c_k}$ is the centroid of the cluster $c_k$ , $\frac{1}{N_k}\sum _{x_i \in c_k}{x_i}$ .
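A direct NumPy implementation of equation 16 is given below; it should agree with scikit-learn's calinski_harabasz_score.

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Eq. 16: between-cluster dispersion over within-cluster dispersion, scaled by (N-K)/(K-1)."""
    X, labels = np.asarray(X), np.asarray(labels)
    clusters = np.unique(labels)
    N, K = len(X), len(clusters)
    overall_mean = X.mean(axis=0)
    between = sum(
        (labels == c).sum() * np.sum((X[labels == c].mean(axis=0) - overall_mean) ** 2)
        for c in clusters
    )
    within = sum(
        np.sum((X[labels == c] - X[labels == c].mean(axis=0)) ** 2) for c in clusters
    )
    return (N - K) / (K - 1) * between / within
```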
For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF52 and Uniform Manifold Approximation and Projection (UMAP) BIBREF53 mappings of the learned representations as well. Implementation of this study is done in Python (version 3.6) using scikit-learn and TensorFlow libraries BIBREF54 , BIBREF55 on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of autoencoders is performed with a single NVIDIA Titan Xp GPU.
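A minimal sketch of producing these 2D mappings (the umap-learn package is assumed to be installed); the resulting coordinates are then scatter-plotted and colored by cluster label.

```python
import numpy as np
from sklearn.manifold import TSNE
import umap  # provided by the umap-learn package

feats = np.random.rand(1000, 24)                       # stand-in for learned representations
tsne_2d = TSNE(n_components=2).fit_transform(feats)    # t-SNE mapping onto the 2D plane
umap_2d = umap.UMAP(n_components=2).fit_transform(feats)
```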
Results
The performance of the representations tested on 3 different clustering algorithms, i.e., the CH scores, for 3 different cluster numbers can be examined in Table 2 . The $L_2$ -norm constrained CAE is simply referred to as $L_2$ -CAE in Table 2 . The same table shows the number of features used for each method as well. The document-term matrix extracted by BoWs and tf-idf features results in a sparse matrix of $63,326 \times 13,026$ with a sparsity of 0.9994733. Similarly, concatenation of word embeddings results in a high number of features, with $32 \times 300 = 9,600$ for word2vec, GloVe and fastText, and $32 \times 768 = 24,576$ for BERT embeddings. In summary, the proposed method of learning representations of tweets with CAEs outperforms all of the conventional algorithms. When representations are compared with Hotelling's $T^2$ test (the multivariate version of the $t$ -test), every representation distribution learned by CAEs is shown to be statistically significantly different from every other conventional representation distribution with $p<0.001$ . In addition, introducing the $L_2$ -norm constraint on the learned representations during training enhances the clustering performance further (again $p<0.001$ when comparing, for example, fastText+CAE vs. fastText+ $L_2$ -CAE). An example learning curve for CAE and $L_2$ -CAE with fastText embeddings as input can also be seen in Figure 2 .
Detailed inspection of tweets that are clustered into the same cluster as well as visual analysis of the formed clusters is also performed. Figure 3 shows the t-SNE and UMAP mappings (onto 2D plane) of the 10 clusters formed by k-means algorithm for LDA, CAE and $L_2$ -CAE representations. Below are several examples of tweets sampled from one of the clusters formed by k-means in the 50 clusters case (fastText embeddings fed into $L_2$ -CAE):
Discussion
Overall, we show that deep convolutional autoencoder-based feature extraction, i.e., representation learning, from health related tweets significantly enhances the performance of clustering algorithms when compared to conventional text feature extraction and topic modeling methods (see Table 2 ). This statement holds true for 3 different clustering algorithms (k-means, Ward, spectral) as well as for 3 different numbers of clusters. In addition, the proposed constrained training ( $L_2$ -norm constraint) is shown to further improve the clustering performance in each experiment as well (see Table 2 ). A Calinski-Harabasz score of 4,304 has been achieved with constrained representation learning by CAE for the experiment of 50 clusters formed by k-means clustering. The highest CH score achieved in the same experiment setting by conventional algorithms was 638, which was achieved by LDA applied to tf-idf features.
Visualizations of t-SNE and UMAP mappings in Figure 3 show that $L_2$ -norm constrained training results in higher separability of clusters. The benefit of this constraint is especially significant in the performance of k-means clustering (see Table 2 ). This phenomenon is not unexpected as k-means clustering is based on $L_2$ distance as well. The difference in learning curves for regular and constrained CAE trainings is also expected. Constrained CAE training converges to a local minimum slightly later than unconstrained CAE, i.e., training of $L_2$ -CAE is slightly slower than that of CAE due to the introduced constraint (see Figure 2 ).
When it comes to the comparison between word embeddings, fastText and BERT word vectors result in the highest CH scores whereas word2vec and GloVe embeddings result in significantly lower performance. This observation can be explained by the nature of word2vec and GloVe embeddings, which cannot handle out-of-vocabulary tokens. Numerous tweets include names of certain drugs which are more likely to be absent in the vocabulary of these models, consequently resulting in vectors of zeros as embeddings. However, fastText embeddings are based on character n-grams which enables handling of out-of-vocabulary tokens, e.g., fastText word vectors of the tokens <acetaminophen> and <paracetamol> are closer to each other simply due to the shared character sequence, <acetam>, even if one of them is not in the vocabulary. Note that <acetaminophen> and <paracetamol> are different names for the same drug.
Using tf-idf or BoWs features directly results in very poor performance. Similarly, concatenating word embeddings to create thousands of features results in significantly lower performance compared to methods that reduce these features to 24. The main reason is that the bias-variance trade-off is dominated by the bias in high dimensional settings, especially in Euclidean spaces BIBREF56 . Due to the very high number of features (relative to the number of observations), the radius of a given region varies with respect to the $n$ th root of its volume, whereas the number of data points in the region varies roughly linearly with the volume BIBREF56 . This phenomenon is known as the curse of dimensionality. As topic models such as LDA and NMF are designed to be used on documents that are sufficiently long to extract robust statistics from, the extracted topic vectors fall short in performance as well when it comes to tweets, due to their short texts.
The main limitation of this study is the absence of topic labels in the dataset. As a result, the internal clustering measure of the Calinski-Harabasz score was used for evaluating the performance of the formed clusters instead of accuracy or normalized mutual information. Even though the CH score is shown to be able to capture clusters of different densities and the presence of subclusters, it has difficulties capturing highly noisy data and skewed distributions BIBREF57 . In addition, the clustering algorithms used, i.e., k-means, Ward and spectral clustering, are hard clustering algorithms, which result in non-overlapping clusters. However, a given tweet can have several topical labels.
Future work includes representation learning of health-related tweets using deep neural network architectures that can inherently learn the sequential nature of the textual data such as recurrent neural networks, e.g., Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) etc. Sequence-to-sequence autoencoders are main examples of such architectures and they have been shown to be effective in encoding paragraphs from Wikipedia and other corpora to lower dimensions BIBREF58 . Furthermore, encodings out of a bidirectional GRU will be tested for clustering performance, as such architectures have been employed to represent a given tweet in other studies BIBREF59 , BIBREF60 , BIBREF61 .
Conclusion
In summary, we show that deep convolutional autoencoders can effectively learn compact representations of health-related tweets in an unsupervised manner. The conducted analyses show that the proposed representation learning scheme outperforms conventional feature extraction methods in three different clustering algorithms. In addition, we propose a constraint on the learned representation in order to further increase the clustering performance. Future work includes comparison of our model with recurrent neural architectures for clustering of health-related tweets. We believe this study serves as an advancement in the field of natural language processing for health informatics, especially in clustering of short-text social media data. | Calinski-Harabasz score, t-SNE, UMAP
aa54e12ff71c25b7cff1e44783d07806e89f8e54 | aa54e12ff71c25b7cff1e44783d07806e89f8e54_0 | Q: What is an example of a health-related tweet?
Text: Introduction
Social media plays an important role in health informatics and Twitter has been one of the most influential social media channel for mining population-level health insights BIBREF0 , BIBREF1 , BIBREF2 . These insights range from forecasting of influenza epidemics BIBREF3 to predicting adverse drug reactions BIBREF4 . A notable challenge due to the short length of Twitter messages is categorization of tweets into topics in a supervised manner, i.e., topic classification, as well as in an unsupervised manner, i.e., clustering.
Classification of tweets into topics has been studied extensively BIBREF5 , BIBREF6 , BIBREF7 . Even though text classification algorithms can reach significant accuracy levels, supervised machine learning approaches require annotated data, i.e, topic categories to learn from for classification. On the other hand, annotated data is not always available as the annotation process is burdensome and time-consuming. In addition, discussions in social media evolve rapidly with recent trends, rendering Twitter a dynamic environment with ever-changing topics. Therefore, unsupervised approaches are essential for mining health-related information from Twitter.
Proposed methods for clustering tweets employ conventional text clustering pipelines involving preprocessing applied to raw text strings, followed by feature extraction which is then followed by a clustering algorithm BIBREF8 , BIBREF9 , BIBREF10 . Performance of such approaches depend highly on feature extraction in which careful engineering and domain knowledge is required BIBREF11 . Recent advancements in machine learning research, i.e., deep neural networks, enable efficient representation learning from raw data in a hierarchical manner BIBREF12 , BIBREF13 . Several natural language processing (NLP) tasks involving Twitter data have benefited from deep neural network-based approaches including sentiment classification of tweets BIBREF14 , predicting potential suicide attempts from Twitter BIBREF15 and simulating epidemics from Twitter BIBREF16 .
In this work, we propose deep convolutional autoencoders (CAEs) for obtaining efficient representations of health-related tweets in an unsupervised manner. We validate our approach on a publicly available dataset from Twitter by comparing the performance of our approach and conventional feature extraction methods on 3 different clustering algorithms. Furthermore, we propose a constraint on the learned representations during neural network training in order to further improve the clustering performance. We show that the proposed deep neural network-based representation learning method outperforms conventional methods in terms of clustering performance in experiments of varying number of clusters.
Related Work
Devising efficient representations of tweets, i.e., features, for performing clustering has been studied extensively. Most frequently used features for representing the text in tweets as numerical vectors are bag-of-words (BoWs) and term frequency-inverse document frequency (tf-idf) features BIBREF17 , BIBREF9 , BIBREF10 , BIBREF18 , BIBREF19 . Both of these feature extraction methods are based on word occurrence counts and eventually, result in a sparse (most elements being zero) document-term matrix. Proposed algorithms for clustering tweets into topics include variants of hierarchical, density-based and centroid-based clustering methods; k-means algorithm being the most frequently used one BIBREF9 , BIBREF19 , BIBREF20 .
Numerous works on topic modeling of tweets are available as well. Topic models are generative models, relying on the idea that a given tweet is a mixture of topics, where a topic is a probability distribution over words BIBREF21 . Even though the objective in topic modeling is slightly different than that of pure clustering, representing each tweet as a topic vector is essentially a way of dimensionality reduction or feature extraction and can further be followed by a clustering algorithm. Proposed topic modeling methods include conventional approaches or variants of them such as Latent Dirichlet Allocation (LDA) BIBREF22 , BIBREF17 , BIBREF9 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF19 , BIBREF28 , BIBREF29 and Non-negative Matrix Factorization (NMF) BIBREF30 , BIBREF18 . Note that topic models such as LDA are based on the notion that words belonging to a topic are more likely to appear in the same document and do not assume a distance metric between discovered topics.
In contrary to abovementioned feature extraction methods which are not specific to representation of tweets but rather generic in natural language processing, various works propose custom feature extraction methods for certain health-related information retrieval tasks from Twitter. For instance, Lim et al. engineered sentiment analysis features to discover latent infectious diseases from Twitter BIBREF31 . In order to track public health condition trends from Twitter, specific features are proposed by Parker at al. employing Wikipedia article index, i.e., treating the retrieval of medically-related Wikipedia articles as an indicator of a health-related condition BIBREF32 . Custom user similarity features calculated from tweets were also proposed for building a framework for recommending health-related topics BIBREF27 .
The idea of learning effective representations from raw data using neural networks has been employed in numerous machine learning domains such as computer vision and natural language processing BIBREF12 , BIBREF13 . The concept relies on the hierarchical, layer-wise architecture of neural networks in which the raw input data is encoded into informative representations of lower dimensions (representations of higher dimensions are possible as well) in a highly non-linear fashion. Autoencoders, Denoising Autoencoders, Convolutional Autoencoders, Sparse Autoencoders, Stacked Autoencoders and combinations of these, e.g., Denoising Convolutional Autoencoders, are the most common deep neural network architectures specifically used for representation learning. In an autoencoder training, the network tries to reconstruct the input data at its output, which forces the model to capture the most salient features of the data at its intermediate layers. If the intermediate layers correspond to a lower dimensional latent space than the original input, such autoencoders are also known as undercomplete. Activations extracted from these layers can be considered as compact, non-linear representations of the input. Another significant advancement in neural network-based representation learning in NLP tasks is word embeddings (also called distributed representation of words). By representing each word in a given vocabulary with a real-valued vector of a fixed dimension, word embeddings enable capturing of lexical, semantic or even syntactic similarities between words. Typically, these vector representations are learned from large corpora and can be used to enhance the performance of numerous NLP tasks such as document classification, question answering and machine translation. Most frequently used word embeddings are word2vec BIBREF33 and GloVe (Global Vectors for Word Representation) BIBREF34 . Both of these are extracted in an unsupervised manner and are based on the distributional hypothesis BIBREF35 , i.e., the assumption that words that occur in the same contexts tend to have similar meanings. Both word2vec and GloVe treat a word as a smallest entity to train on. A shift in this paradigm was introduced by fastText BIBREF36 , which treats each word as a bag of character n-grams. Consequently, fastText embeddings are shown to have better representations for rare words BIBREF36 . In addition, one can still construct a vector representation for an out-of-vocabulary word which is not possible with word2vec or GloVe embeddings BIBREF36 . Enhanced methods for deducting better word and/or sentence representations were recently introduced as well by Peters et al. with the name ELMo (Embeddings from Language Models) BIBREF37 and by Devlin et al. with the name BERT (Bidirectional Encoder Representations from Transformers) BIBREF38 . All of these word embedding models are trained on large corpora such as Wikipedia, in an unsupervised manner. For analyzing tweets, word2vec and GloVe word embeddings have been employed for topical clustering of tweets BIBREF39 , topic modeling BIBREF40 , BIBREF41 and extracting depression symptoms from tweets BIBREF20 .
Metrics for evaluating the performance of clustering algorithms vary depending on whether the ground truth topic categories are available or not. If so, frequently used metrics are accuracy and normalized mutual information. In the absence of ground truth labels, one has to use internal clustering criteria such as the Calinski-Harabasz (CH) score BIBREF42 and the Davies-Bouldin index BIBREF43 . Arbelaitz et al. provide an extensive comparative study of cluster validity indices BIBREF44 .
Dataset
For this study, a publicly available dataset is used BIBREF45 . The dataset, consisting of tweets, has been collected using the Twitter API and was initially introduced by Karami et al. BIBREF46 . The earliest tweet dates back to 13 June 2011, while the latest one has a timestamp of 9 April 2015. The dataset consists of 63,326 tweets in the English language, collected from the Twitter channels of 16 major health news agencies. The list of health news channels and the number of tweets in the dataset from each channel can be examined in Table 1 .
A typical tweet from the dataset can be examined in Figure 1 . For every tweet, the raw data consists of the tweet text, in most cases followed by a URL to the original news article of the particular news source. This URL string, if available, is removed from each tweet as it does not carry any natural language information. As Twitter allows several ways for users to interact, such as retweeting or mentioning, these actions appear in the raw text as well. For retweets, an indicator string "RT" appears as a prefix in the raw data, and for user mentions, a string of the form "@username" appears in the raw data. These two tokens are removed as well. In addition, hashtags are converted to plain tokens by removing the "#" sign appearing before them (e.g. <#pregnancy> becomes <pregnancy>). The number of words, number of unique words and mean word counts for each Twitter channel can also be examined in Table 1 . The longest tweet consists of 27 words.
Conventional Representations
For representing tweets, 5 conventional representation methods are proposed as baselines.
Word frequency features: For word occurrence-based representations of tweets, conventional tf-idf and BoWs are used to obtain the document-term matrix of size $N \times P$ in which each row corresponds to a tweet and each column corresponds to a unique word/token, i.e., $N$ data points and $P$ features. As the document-term matrix obtained from tf-idf or BoWs features is extremely sparse and consequently redundant across many dimensions, dimensionality reduction and topic modeling to a lower dimensional latent space are performed by the methods below.
Principal Component Analysis (PCA): PCA is used to map the word frequency representations from the original feature space to a lower dimensional feature space by an orthogonal linear transformation in such a way that the first principal component has the highest possible variance and similarly, each succeeding component has the highest variance possible while being orthogonal to the preceding components. Our PCA implementation has a time complexity of $\mathcal {O}(NP^2 + P^3)$ .
Truncated Singular Value Decomposition (t-SVD): Standard SVD and t-SVD are commonly employed dimensionality reduction techniques in which a matrix is reduced or approximated into a low-rank decomposition. Time complexity of SVD and t-SVD for $S$ components are $\mathcal {O}(min(NP^2, N^2P))$ and $\mathcal {O}(N^2S)$ , respectively (depending on the implementation). Contrary to PCA, t-SVD can be applied to sparse matrices efficiently as it does not require data normalization. When the data matrix is obtained by BoWs or tf-idf representations as in our case, the technique is also known as Latent Semantic Analysis.
LDA: Our LDA implementation employs online variational Bayes algorithm introduced by Hoffman et al. which uses stochastic optimization to maximize the objective function for the topic model BIBREF47 .
NMF: As NMF finds two non-negative matrices whose product approximates the non-negative document-term matrix, it allows regularization. Our implementation did not employ any regularization and the divergence function is set to be squared error, i.e., Frobenius norm.
Representation Learning
We propose 2D convolutional autoencoders for extracting compact representations of tweets from their raw form in a highly non-linear fashion. In order to turn a given tweet into a 2D structure to be fed into the CAE, we extract the word vectors of each word using word embedding models, i.e., for a given tweet, $t$ , consisting of $W$ words, the 2D input is $I_{t} \in \mathbb {R}^{W \times D}$ where $D$ is the embedding vector dimension. We compare 4 different word embeddings, namely word2vec, GloVe, fastText and BERT, with embedding vector dimensions of 300, 300, 300 and 768, respectively. We set the maximum sequence length to 32, i.e., for tweets with fewer words, the input matrix is padded with zeros. As word2vec and GloVe embeddings cannot handle out-of-vocabulary words, such cases are represented as a vector of zeros. The process of extracting word vector representations of a tweet to form the 2D input matrix can be examined in Figure 1 .
The CAE architecture can be considered as consisting of 2 parts, i.e., the encoder and the decoder. The encoder, $f_{enc}(\cdot )$ , is the part of the network that compresses the input, $I$ , into a latent space representation, $U$ , and the decoder, $f_{dec}(\cdot )$ , aims to reconstruct the input from the latent space representation (see equation 12 ). In essence,
$$U = f_{enc}(I) = f_{L}(f_{L-1}(...f_{1}(I)))$$ (Eq. 12)
where $L$ is the number of layers in the encoder part of the CAE.
The encoder in the proposed architecture consists of three 2D convolutional layers with 64, 32 and 1 filters, respectively. The decoder follows the same symmetry with three convolutional layers with 1, 32 and 64 filters, respectively, and an output convolutional layer of a single filter (see Figure 1 ). All convolutional layers have a kernel size of (3 $\times $ 3) and an activation function of Rectified Linear Unit (ReLU), except the output layer which employs a linear activation function. Each convolutional layer in the encoder is followed by a 2D MaxPooling layer and similarly each convolutional layer in the decoder is followed by a 2D UpSampling layer, serving as an inverse operation (having the same parameters). The pooling sizes for pooling layers are (2 $\times $ 5), (2 $\times $ 5) and (2 $\times $ 2), respectively, for the architectures in which word2vec, GloVe and fastText embeddings are employed. With this configuration, an input tweet of size $32 \times 300$ (corresponding to maximum sequence length $\times $ embedding dimension, $D$ ) is downsampled to a size of $4 \times 6$ out of the encoder (bottleneck layer). As BERT word embeddings have word vectors of fixed size 768, the pooling layer sizes are chosen to be (2 $\times $ 8), (2 $\times $ 8) and (2 $\times $ 2), respectively, for that case. In summary, a representation of $4 \times 6 = 24$ values is learned for each tweet through the encoder, e.g., for fastText embeddings the flow of dimensions after each encoder block is: $32 \times 300 \rightarrow 16 \times 60 \rightarrow 8 \times 12 \rightarrow 4 \times 6$ .
In numerous NLP tasks, an Embedding Layer is employed as the first layer of the neural network which can be initialized with the word embedding matrix in order to incorporate the embedding process into the architecture itself instead of manual extraction. In our case, this was not possible because of nonexistence of an inversed embedding layer in the decoder (as in the relationship between MaxPooling layers and UpSampling layers) as an embedding layer is not differentiable.
Training of autoencoders aims to minimize the reconstruction error/loss, i.e., the deviation of the reconstructed output from the input. The $L_2$ -loss or mean squared error (MSE) is chosen to be the loss function. In autoencoders, minimizing the $L_2$ -loss is equivalent to maximizing the mutual information between the reconstructed inputs and the original ones BIBREF48 . In addition, from a probabilistic point of view, minimizing the $L_2$ -loss is the same as maximizing the likelihood of the data given the parameters, corresponding to a maximum likelihood estimator. The optimizer for the autoencoder training is chosen to be Adam due to its faster convergence abilities BIBREF49 . The learning rate for the optimizer is set to $10^{-5}$ and the batch size for the training is set to 32. A random split of 80% training-20% validation is performed for monitoring convergence. The maximum number of training epochs is set to 50.
$L_2$-norm Constrained Representation Learning
Certain constraints on neural network weights are commonly employed during training in order to reduce overfitting, also known as regularization. Such constraints include $L_1$ regularization, $L_2$ regularization, orthogonal regularization etc. Even though regularization is a common practice, standard training of neural networks does not inherently impose any constraints on the learned representations (activations), $U$ , other than the ones compelled by the activation functions (e.g. ReLUs resulting in non-negative outputs). Recent advancements in computer vision research show that constraining the learned representations can enhance the effectiveness of representation learning, consequently increasing the clustering performance BIBREF50 , BIBREF51 .
$$\begin{aligned} & \text{minimize} & & L = \frac{1}{N} \left\Vert I - f_{dec}(f_{enc}(I))\right\Vert ^2_{2} \\ & \text{subject to} & & \left\Vert f_{enc}(I)\right\Vert ^2_{2} = 1 \end{aligned}$$ (Eq. 14)
We propose an $L_2$ norm constraint on the learned representations out of the bottleneck layer, $U$ . Essentially, this is a hard constraint introduced during neural network training that results in learned features with unit $L_2$ norm out of the bottleneck layer (see equation 14 where $N$ is the number of data points). Training a deep convolutional autoencoder with such a constraint is shown to be much more effective for image data than applying $L_2$ normalization on the learned representations after training BIBREF51 . To the best of our knowledge, this is the first study to incorporate $L_2$ norm constraint in a task involving text data.
Evaluation
In order to fairly compare and evaluate the proposed methods in terms of effectiveness in representation of tweets, we fix the number of features to 24 for all methods and feed these representations as an input to 3 different clustering algorithms namely, k-means, Ward and spectral clustering with cluster numbers of 10, 20 and 50. The distance metric for k-means clustering is chosen to be Euclidean and the linkage criterion for Ward clustering is chosen to be minimizing the sum of differences within all clusters, i.e., recursively merging pairs of clusters that minimally increase the within-cluster variance in a hierarchical manner. For spectral clustering, a Gaussian kernel has been employed for constructing the affinity matrix. We also run experiments with tf-idf and BoWs representations without further dimensionality reduction as well as concatenation of all word embeddings into a long feature vector. For evaluation of clustering performance, we use the Calinski-Harabasz score BIBREF42 , also known as the variance ratio criterion. The CH score is defined as the ratio between the within-cluster dispersion and the between-cluster dispersion. The CH score has a range of $[0, +\infty )$ and a higher CH score corresponds to a better clustering. The computational complexity of calculating the CH score is $\mathcal {O}(N)$ .
For a given dataset $X$ consisting of $N$ data points, i.e., $X = \big \lbrace x_1, x_2, ... , x_N\big \rbrace $ and a given set of disjoint clusters $C$ with $K$ clusters, i.e., $C = \big \lbrace c_1, c_2, ... , c_K\big \rbrace $ , Calinski-Harabasz score, $S_{CH}$ , is defined as
$$S_{CH} = \frac{N-K}{K-1}\frac{\sum _{c_k \in C}^{}{N_k \left\Vert \overline{c_k}-\overline{X}\right\Vert ^2_{2}}}{\sum _{c_k \in C}^{}{}\sum _{x_i \in c_k}^{}{\left\Vert x_i-\overline{c_k}\right\Vert ^2_{2}}}$$ (Eq. 16)
where $N_k$ is the number of points belonging to the cluster $c_k$ , $\overline{X}$ is the centroid of the entire dataset, $\frac{1}{N}\sum _{x_i \in X}{x_i}$ and $\overline{c_k}$ is the centroid of the cluster $c_k$ , $\frac{1}{N_k}\sum _{x_i \in c_k}{x_i}$ .
For visual validation, we plot and inspect the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF52 and Uniform Manifold Approximation and Projection (UMAP) BIBREF53 mappings of the learned representations as well. Implementation of this study is done in Python (version 3.6) using scikit-learn and TensorFlow libraries BIBREF54 , BIBREF55 on a 64-bit Ubuntu 16.04 workstation with 128 GB RAM. Training of autoencoders is performed with a single NVIDIA Titan Xp GPU.
Results
The performance of the representations tested on 3 different clustering algorithms, i.e., the CH scores, for 3 different cluster numbers can be examined in Table 2 . The $L_2$ -norm constrained CAE is simply referred to as $L_2$ -CAE in Table 2 . The same table shows the number of features used for each method as well. The document-term matrix extracted by BoWs and tf-idf features results in a sparse matrix of $63,326 \times 13,026$ with a sparsity of 0.9994733. Similarly, concatenation of word embeddings results in a high number of features, with $32 \times 300 = 9,600$ for word2vec, GloVe and fastText, and $32 \times 768 = 24,576$ for BERT embeddings. In summary, the proposed method of learning representations of tweets with CAEs outperforms all of the conventional algorithms. When representations are compared with Hotelling's $T^2$ test (the multivariate version of the $t$ -test), every representation distribution learned by CAEs is shown to be statistically significantly different from every other conventional representation distribution with $p<0.001$ . In addition, introducing the $L_2$ -norm constraint on the learned representations during training enhances the clustering performance further (again $p<0.001$ when comparing, for example, fastText+CAE vs. fastText+ $L_2$ -CAE). An example learning curve for CAE and $L_2$ -CAE with fastText embeddings as input can also be seen in Figure 2 .
Detailed inspection of tweets that are clustered into the same cluster as well as visual analysis of the formed clusters is also performed. Figure 3 shows the t-SNE and UMAP mappings (onto 2D plane) of the 10 clusters formed by k-means algorithm for LDA, CAE and $L_2$ -CAE representations. Below are several examples of tweets sampled from one of the clusters formed by k-means in the 50 clusters case (fastText embeddings fed into $L_2$ -CAE):
Discussion
Overall, we show that deep convolutional autoencoder-based feature extraction, i.e., representation learning, from health related tweets significantly enhances the performance of clustering algorithms when compared to conventional text feature extraction and topic modeling methods (see Table 2 ). This statement holds true for 3 different clustering algorithms (k-means, Ward, spectral) as well as for 3 different numbers of clusters. In addition, the proposed constrained training ( $L_2$ -norm constraint) is shown to further improve the clustering performance in each experiment as well (see Table 2 ). A Calinski-Harabasz score of 4,304 has been achieved with constrained representation learning by CAE for the experiment of 50 clusters formed by k-means clustering. The highest CH score achieved in the same experiment setting by conventional algorithms was 638, which was achieved by LDA applied to tf-idf features.
Visualizations of t-SNE and UMAP mappings in Figure 3 show that $L_2$ -norm constrained training results in higher separability of clusters. The benefit of this constraint is especially significant in the performance of k-means clustering (see Table 2 ). This phenomenon is not unexpected as k-means clustering is based on $L_2$ distance as well. The difference in learning curves for regular and constrained CAE trainings is also expected. Constrained CAE training converges to a local minimum slightly later than unconstrained CAE, i.e., training of $L_2$ -CAE is slightly slower than that of CAE due to the introduced constraint (see Figure 2 ).
When it comes to the comparison between word embeddings, fastText and BERT word vectors result in the highest CH scores whereas word2vec and GloVe embeddings result in significantly lower performance. This observation can be explained by the nature of word2vec and GloVe embeddings, which cannot handle out-of-vocabulary tokens. Numerous tweets include names of certain drugs which are more likely to be absent in the vocabulary of these models, consequently resulting in vectors of zeros as embeddings. However, fastText embeddings are based on character n-grams which enables handling of out-of-vocabulary tokens, e.g., fastText word vectors of the tokens <acetaminophen> and <paracetamol> are closer to each other simply due to the shared character sequence, <acetam>, even if one of them is not in the vocabulary. Note that <acetaminophen> and <paracetamol> are different names for the same drug.
Using tf-idf or BoWs features directly results in very poor performance. Similarly, concatenating word embeddings to create thousands of features results in significantly lower performance compared to methods that reduce these features to 24. The main reason is that the bias-variance trade-off is dominated by the bias in high dimensional settings, especially in Euclidean spaces BIBREF56 . Due to the very high number of features (relative to the number of observations), the radius of a given region varies with respect to the $n$ th root of its volume, whereas the number of data points in the region varies roughly linearly with the volume BIBREF56 . This phenomenon is known as the curse of dimensionality. As topic models such as LDA and NMF are designed to be used on documents that are sufficiently long to extract robust statistics from, the extracted topic vectors fall short in performance as well when it comes to tweets, due to their short texts.
The main limitation of this study is the absence of topic labels in the dataset. As a result, the internal clustering measure of the Calinski-Harabasz score was used for evaluating the performance of the formed clusters instead of accuracy or normalized mutual information. Even though the CH score is shown to be able to capture clusters of different densities and the presence of subclusters, it has difficulties capturing highly noisy data and skewed distributions BIBREF57 . In addition, the clustering algorithms used, i.e., k-means, Ward and spectral clustering, are hard clustering algorithms, which result in non-overlapping clusters. However, a given tweet can have several topical labels.
Future work includes representation learning of health-related tweets using deep neural network architectures that can inherently learn the sequential nature of the textual data such as recurrent neural networks, e.g., Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) etc. Sequence-to-sequence autoencoders are main examples of such architectures and they have been shown to be effective in encoding paragraphs from Wikipedia and other corpora to lower dimensions BIBREF58 . Furthermore, encodings out of a bidirectional GRU will be tested for clustering performance, as such architectures have been employed to represent a given tweet in other studies BIBREF59 , BIBREF60 , BIBREF61 .
Conclusion
In summary, we show that deep convolutional autoencoders can effectively learn compact representations of health-related tweets in an unsupervised manner. The conducted analyses show that the proposed representation learning scheme outperforms conventional feature extraction methods in three different clustering algorithms. In addition, we propose a constraint on the learned representation in order to further increase the clustering performance. Future work includes comparison of our model with recurrent neural architectures for clustering of health-related tweets. We believe this study serves as an advancement in the field of natural language processing for health informatics, especially in clustering of short-text social media data. | The health benefits of alcohol consumption are more limited than previously thought, researchers say
1405824a6845082eae0458c94c4affd7456ad0f7 | 1405824a6845082eae0458c94c4affd7456ad0f7_0 | Q: Was the introduced LSTM+CNN model trained on annotated data in a supervised fashion?
Text: Introduction
Since Satoshi Nakamoto published the article "Bitcoin: A Peer-to-Peer Electronic Cash System" in 2008 BIBREF0 , and after the official launch of Bitcoin in 2009, technologies such as blockchain and cryptocurrency have attracted attention from academia and industry. At present, these technologies have been applied to many fields such as medical science, economics and the Internet of Things BIBREF1 . Since the launch of Ethereum (a next-generation encryption platform) BIBREF2 with the smart contract function proposed by Vitalik Buterin in 2015, its dedicated cryptocurrency Ether, smart contracts, the blockchain and its decentralized Ethereum Virtual Machine (EVM) have attracted much attention. The main reason is that its design provides developers with the ability to build decentralized apps (Dapps), and thus enables wider applications. This new application paradigm opens the door to many possibilities and opportunities.
Initial Coin Offerings (ICOs) are a financing method for the blockchain industry. As an example of financial innovation, an ICO provides rapid access to capital for new ventures but suffers from drawbacks relating to non-regulation, considerable risk, and non-accountability. According to a report prepared by Satis Group Crypto Research, around 81% of the total number of ICOs launched since 2017 have turned out to be scams BIBREF3 . Also, an inquiry by the University of Pennsylvania reveals that many ICOs failed even to promise that they would protect investors against insider self-dealing.
Google, Facebook, and Twitter have announced that they will ban advertising of cryptocurrencies, ICO etc. in the future. Fraudulent pyramid selling of virtual currency happens frequently in China. The People’s Bank of China has banned the provision of services for virtual currency transactions and ICO activities.
The incredibly huge number of ICO projects makes it difficult for people to recognize their risks. In order to find items of interest, people usually query the social network for opinions on those items, and then view or purchase them. In reality, the opinions on social network platforms are the major entrance point for users. People are inclined to view or buy items that have been purchased by many other people and/or have high review scores.
Sentiment analysis is contextual mining of text which identifies and extracts subjective information in the source material, helping a business understand the social sentiment of its brand, product or service while monitoring online conversations. To resolve the risk and fraud problems, it is important not only to analyze social-network opinion, but also to scan smart contracts for vulnerabilities.
Our Proposed Methodology
We propose two methodologies which integrate a Long Short-Term Memory network (LSTM) and a Convolutional Neural Network (CNN) into a sentiment analysis model: one outputs the softmax probability over two kinds of emotions, positive and negative; the other outputs a tanh sentiment score for the input text in the range [-1, 1], where -1 represents the most negative emotion and +1 the most positive. Fig. 1 shows our system flow-chart and Fig. 2 shows our system architecture. Detailed descriptions are given below.
Tokenizing and Word Embedding. The raw input text can be noisy. They contain specific words which could affect the model training process. To clean these input text, we use the tokenizer from Stanford NLP BIBREF4 to remove some unnecessary tokens such as username, hashtag and URL. Word embedding is a distributed representation of a word, which is suitable for the input of neural networks. In this work, we choose the word embedding size d = 100, and use the 100-dimensional GloVe word embeddings pre-trained on the 27B Twitter data to initialize the word embeddings.
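A sketch of loading the 100-dimensional GloVe Twitter vectors and embedding a cleaned, tokenized text into a fixed-size matrix is given below; the local file path is an assumption, and the padding length follows the maximum sequence length of 64 described later.

```python
import numpy as np

EMB_DIM = 100

def load_glove(path="glove.twitter.27B.100d.txt"):
    """Load the pre-trained 100-d GloVe Twitter vectors into a dict (path assumed local)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embed(tokens, vectors, max_len=64):
    """Map a tokenized text to a (max_len x 100) matrix; OOV and padding rows are zeros."""
    mat = np.zeros((max_len, EMB_DIM), dtype=np.float32)
    for i, tok in enumerate(tokens[:max_len]):
        if tok in vectors:
            mat[i] = vectors[tok]
    return mat
```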
Model Architecture. The architectures of our models are shown in Figure 3 , both based on a combination of LSTM and CNN.
Training. We use a maximum input sequence length of 64 during training and testing. For the LSTM layer, the number of hidden units is 64, and all hidden layers have a size of 128. All trainable parameters are randomly initialized. Our models are trained by minimizing their objective functions: we use the $\ell _{2}$ loss for the tanh model and cross-entropy for the softmax model. For optimization, we use Adam BIBREF7 with the two momentum parameters set to 0.9 and 0.999, respectively. The initial learning rate was set to 1E-3, and the batch size was 2048. Fig. 4 shows our experiment results.
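A hedged Keras sketch of such an LSTM+CNN model following these hyperparameters is given below; the exact layer ordering, convolution filter sizes and pooling are assumptions, since those details are only specified in Figure 3.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(head="tanh", max_len=64, emb_dim=100):
    """Sketch of an LSTM+CNN sentiment model with either a softmax or a tanh head."""
    inp = layers.Input(shape=(max_len, emb_dim))
    x = layers.LSTM(64, return_sequences=True)(inp)      # 64 hidden units
    x = layers.Conv1D(128, 3, activation="relu")(x)      # CNN over LSTM outputs (assumed)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(128, activation="relu")(x)          # hidden layer size 128
    if head == "softmax":
        out = layers.Dense(2, activation="softmax")(x)   # positive / negative
        loss = "categorical_crossentropy"
    else:
        out = layers.Dense(1, activation="tanh")(x)      # sentiment score in [-1, 1]
        loss = "mse"                                     # l2 loss
    model = Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3, beta_1=0.9, beta_2=0.999),
                  loss=loss)
    return model

model = build_model("softmax")
# model.fit(X_train, y_train, batch_size=2048, epochs=300, validation_data=(X_val, y_val))
```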
Ranking. The design of the model can guarantee the accuracy of the sentiment score of each text. However, when calculating the project score, problems occur when projects have the same score: for example, item A and item B may have the same score while item A has 1000 texts and item B has only 1 text. A simple weighted score does not reflect the difference in the amount of data between projects A and B. We therefore designed a score-based ranking algorithm (as shown in equation 16 ).
Datasets and Experimental Results
We train our models on Sentiment140 and Amazon product reviews. Both of these datasets concentrate on sentiment represented by a short text. A summary description of the other datasets used for validation is given below:
We have provided baseline results for the accuracy of other models against these datasets (as shown in Table 1 ). For training the softmax model, we divide the text sentiment into two kinds of emotion, positive and negative. For training the tanh model, we convert the positive and negative emotions to a continuous sentiment score in [-1.0, 1.0], where 1.0 means positive and vice versa. We also test our model against various models, calculate metrics such as accuracy, precision and recall, and show the results in Table 2 . Table 3 , Table 4 , Table 5 , Table 6 , Table 7 and Table 8 give more detailed precision and recall of our models against other datasets.
From the above experimental results, it can be seen that the IMDB and YELP datasets do not have the neutral problem, so they can be handled with softmax. However, when we compare SST with SWN, we find many neutral sentences, which causes softmax to achieve poor results. We then tried tanh and obtained better results; because of the dataset, the current performance is still lower than that reported in other papers, but it is already better than using softmax. Therefore, we also introduce the concept of emotional continuity in this work: we express the emotion in the output space of the tanh function, [-1, 1], where -1 represents the most negative emotion, +1 represents the most positive emotion, and 0 means a neutral sentence without any emotion. Compared to other dichotomies or triads, we think this is more intuitive. We list the scores using the Yelp dataset as an example, divide them into positive (0.33, 1], neutral [-0.33, 0.33] and negative [-1, -0.33) emotions, and present three representative sentences for each in Table 9 (the scores are rounded to 5 decimal places).
In order to validate the concept of emotional continuity, we re-map the ratings of the cell phones and accessories reviews (Amazon product reviews) from five sentiment scores (5: very positive, 4: positive, 3: neutral, 2: negative, 1: very negative) to three sentiment ranges ((0.33, 1]: very positive and positive, [-0.33, 0.33]: neutral, [-1, -0.33): negative and very negative) and re-train our model. We test our tanh and softmax models on other Amazon product reviews datasets (musical instruments, home and kitchen, toys and games, etc.) and calculate metrics such as the accuracy on neutral reviews and on all reviews. With these experimental results, we find that the tanh model achieves better results and can handle the neutral problem. The results are shown in Table 10 .
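A small helper reflecting this three-way binning is sketched below; the exact interval endpoints are assumed from the ranges given in the text.

```python
def to_three_way(score: float) -> str:
    """Map a tanh sentiment score in [-1, 1] to the three bins described above."""
    if score > 0.33:
        return "positive"
    if score < -0.33:
        return "negative"
    return "neutral"

# Re-mapping of the original 5-star Amazon ratings to the same three bins (assumed rule).
star_to_bin = {5: "positive", 4: "positive", 3: "neutral", 2: "negative", 1: "negative"}
print(to_three_way(0.72), star_to_bin[4])  # -> positive positive
```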
Our system runs on 64-bit Ubuntu 14.04; the hardware setting is 128 GB DDR4 2400 RAM, an Intel(R) Xeon(R) E5-2620 v4 CPU, and NVIDIA TITAN V, TITAN XP, and GTX 1080 GPUs; the software setting is the nvidia-docker tensorflow:18.04-py3 image on NVIDIA cloud. More specifically, our models are trained on a single GPU (NVIDIA TITAN V), with a runtime of roughly 5 minutes per epoch. We run all the models for up to 300 epochs and select the model which has the best accuracy on the testing set.
Related Work
The sentiment of a sentence can be inferred with subjectivity classification and polarity classification, where the former classifies whether a sentence is subjective or objective and the latter decides whether a subjective sentence expresses a negative or positive sentiment. In existing deep learning models, sentence sentiment classification is usually formulated as a joint three-way classification problem, namely, to predict a sentence as positive, neutral, or negative. Wang et al. proposed a regional CNN-LSTM model, which consists of two parts, a regional CNN and an LSTM, to predict the valence-arousal ratings of text BIBREF11 . Wang et al. described a joint CNN and RNN architecture for sentiment classification of short texts, which takes advantage of the coarse-grained local features generated by the CNN and long-distance dependencies learned via the RNN BIBREF12 . Guggilla et al. presented an LSTM- and CNN-based deep neural network model, which utilizes word2vec and linguistic embeddings for claim classification (classifying sentences as factual or feeling) BIBREF13 . Kim also proposed to use CNN for sentence-level sentiment classification and experimented with several variants, namely CNN-rand (where word embeddings are randomly initialized), CNN-static (where word embeddings are pre-trained and fixed), CNN-non-static (where word embeddings are pre-trained and fine-tuned) and CNN-multichannel (where multiple sets of word embeddings are used) BIBREF9 .
In addition, regarding the use of machine learning for fraud problems, Carcillo et al. proposed reducing credit card fraud by using machine learning techniques to enhance traditional active learning strategies BIBREF14 . Wang et al. observe users' time-series data over the entire browsing sequence and then use deep learning methods to detect e-commerce fraud BIBREF15 . As blockchain and virtual currencies prevail, fraudulent transactions cannot be ignored. Toyoda et al. identify characteristics of fraudulent Bitcoin addresses by extracting features from transactions through analysis of transaction patterns BIBREF16 ; Bian et al. predict the quality of ICO projects from different kinds of information such as white papers, founding teams, GitHub repositories, and websites BIBREF17 .
Conclusion
In this paper we developed an LSTM + CNN system for sentiment analysis of social-network opinion. Compared with the baseline approach, our system obtains good results. Our goals are to optimize the number of parameters and the network structure, and to release automated detection tools, a public RESTful API, and a chatbot. Future work is to reduce the complexity of the task and to train for higher performance on difficult sentiment, in order to improve sentiment analysis. We also built a "help us" website for labeling more data, shown in Fig. 5 . The experimental material and research results will be published on the website as they are updated.
More importantly, we collected users' cryptocurrency comments from the social network (as shown in Table 11 ) and deployed our sentiment analysis in RatingToken and Coin Master (Android applications of the Cheetah Mobile Blockchain Security Center). Our proposed methodology can effectively provide detailed information to help resolve the risks of fake and fraudulent projects. Fig. 6 is a screenshot of our sentiment analysis production website.
Acknowledgement
This work would not have been possible without the valuable dataset offered by Cheetah Mobile. Special thanks to RatingToken and Coin Master. | Yes |
5be94c7c54593144ba2ac79729d7545f27c79d37 | 5be94c7c54593144ba2ac79729d7545f27c79d37_0 | Q: What is the challenge for other language except English
Text: Introduction
Offensive language in user-generated content on online platforms, and its implications, have been gaining attention over the last couple of years. This interest is sparked by the fact that many of the online social media platforms have come under scrutiny over how this type of content should be detected and dealt with. It is, however, far from trivial to deal with this type of language directly, due to the gigantic amount of user-generated content created every day. For this reason, automatic methods are required, using natural language processing (NLP) and machine learning techniques.
Given that research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. To accomplish this, an appropriate dataset must be constructed and annotated with the guidelines described in BIBREF0 . We furthermore set out to analyze the linguistic patterns that prove hard to detect.
Background
Offensive language varies greatly, ranging from simple profanity to much more severe types of language. One of the more troublesome types of language is hate speech and the presence of hate speech on social media platforms has been shown to be in correlation with hate crimes in real life settings BIBREF1 . It can be quite hard to distinguish between generally offensive language and hate speech as few universal definitions exist BIBREF2 . There does, however, seem to be a general consensus that hate speech can be defined as language that targets a group with the intent to be harmful or to cause social chaos. This targeting is usually done on the basis of some characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . In section "Background" , hate speech is defined in more detail. Offensive language, on the other hand, is a more general category containing any type of profanity or insult. Hate speech can, therefore, be classified as a subset of offensive language. BIBREF0 propose guidelines for classifying offensive language as well as the type and the target of offensive language. These guidelines capture the characteristics of generally offensive language, hate speech and other types of targeted offensive language such as cyberbullying. However, despite offensive language detection being a burgeoning field, no dataset yet exists for Danish BIBREF4 despite this phenomenon being present BIBREF5 .
Many different sub-tasks have been considered in the literature on offensive and harmful language detection, ranging from the detection of general offensive language to more refined tasks such as hate speech detection BIBREF2 , and cyberbullying detection BIBREF6 .
A key aspect in the research of automatic classification methods for language of any kind is having substantial amount of high quality data that reflects the goal of the task at hand, and that also contains a decent amount of samples belonging to each of the classes being considered. To approach this problem as a supervised classification task the data needs to be annotated according to a well-defined annotation schema that clearly reflects the problem statement. The quality of the data is of vital importance, since low quality data is unlikely to provide meaningful results. Cyberbullying is commonly defined as targeted insults or threats against an individual BIBREF0 . Three factors are mentioned as indicators of cyberbullying BIBREF6 : intent to cause harm, repetitiveness, and an imbalance of power. This type of online harassment most commonly occurs between children and teenagers, and cyberbullying acts are prohibited by law in several countries, as well as many of the US states BIBREF7 .
BIBREF8 focus on classifying cyberbullying events in Dutch. They define cyberbullying as textual content that is published online by an individual and is aggressive or hurtful against a victim. The annotation-schema used consists of two steps. In the first step, a three-point harmfulness score is assigned to each post as well as a category denoting the author's role (i.e. harasser, victim, or bystander). In the second step a more refined categorization is applied, by annotating the posts using the following labels: Threat/Blackmail, Insult, Curse/Exclusion, Defamation, Sexual Talk, Defense, and Encouragement to the harasser. Hate Speech. As discussed in Section "Classification Structure" , hate speech is generally defined as language that is targeted towards a group, with the intent to be harmful or cause social chaos. This targeting is usually based on characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . Hate speech is prohibited by law in many countries, although the definitions may vary. In article 20 of the International Covenant on Civil and Political Rights (ICCPR) it is stated that "Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law" BIBREF9 . In Denmark, hate speech is prohibited by law, and is formally defined as public statements where a group is threatened, insulted, or degraded on the basis of characteristics such as nationality, ethnicity, religion, or sexual orientation BIBREF10 . Hate speech is generally prohibited by law in the European Union, where it is defined as public incitement to violence or hatred directed against a group defined on the basis of characteristics such as race, religion, and national or ethnic origin BIBREF11 . Hate speech is, however, not prohibited by law in the United States. This is due to the fact that hate speech is protected by the freedom of speech act in the First Amendment of the U.S. Constitution BIBREF12 .
The focus of BIBREF2 is on classifying hate speech by distinguishing between general offensive language and hate speech. They define hate speech as "language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group". They argue that the high use of profanity on social media makes it vitally important to be able to effectively distinguish between generally offensive language and the more severe hate speech. The dataset is constructed by gathering data from Twitter, using a hate speech lexicon to query the data, with crowdsourced annotations.
Contradicting definitions. It becomes clear that one of the key challenges in doing meaningful research on the topic are the differences in both the annotation-schemas and the definitions used, since it makes it difficult to effectively compare results to existing work, as pointed out by several authors ( BIBREF13 , BIBREF3 , BIBREF14 , BIBREF0 ). These issues become clear when comparing the work of BIBREF6 , where racist and sexist remarks are classified as a subset of insults, to the work of BIBREF15 , where similar remarks are split into two categories; hate speech and derogatory language. Another clear example of conflicting definitions becomes visible when comparing BIBREF16 , where hate speech is considered without any consideration of overlaps with the more general type of offensive language, to BIBREF2 where a clear distinction is made between the two, by classifying posts as either Hate speech, Offensive or Neither. This lack of consensus led BIBREF14 to propose annotation guidelines and introduce a typology. BIBREF17 argue that these proposed guidelines do not effectively capture both the type and target of the offensive language.
Dataset
In this section we give a comprehensive overview of the structure of the task and describe the dataset provided in BIBREF0 . Our work adopts this framing of the offensive language phenomenon.
Classification Structure
Offensive content is broken into three sub-tasks to be able to effectively identify both the type and the target of the offensive posts. These three sub-tasks are chosen with the objective of being able to capture different types of offensive language, such as hate speech and cyberbullying (section "Background" ).
In sub-task A the goal is to classify posts as either offensive or not. Offensive posts include insults and threats as well as any form of untargeted profanity BIBREF17 . Each sample is annotated with one of the following labels:
Not Offensive (NOT). In English this could be a post such as #TheNunMovie was just as scary as I thought it would be. Clearly the critics don't think she is terrifyingly creepy. I like how it ties in with #TheConjuring series. In Danish this could be a post such as Kim Larsen var god, men hans død blev alt for hyped.
Offensive (OFF). In English this could be a post such as USER is a #pervert himself!. In Danish this could be a post such as Kalle er faggot...
In sub-task B the goal is to classify the type of offensive language by determining if the offensive language is targeted or not. Targeted offensive language contains insults and threats to an individual, group, or others BIBREF17 . Untargeted posts contain general profanity while not clearly targeting anyone BIBREF17 . Only posts labeled as offensive (OFF) in sub-task A are considered in this task. Each sample is annotated with one of the following labels:
Targeted Insult (TIN). In English this could be a post such as @USER Please ban this cheating scum. In Danish this could be e.g. Hun skal da selv have 99 år, den smatso.
Untargeted (UNT). In English this could be a post such as 2 weeks of resp done and I still don't know shit my ass still on vacation mode. In Danish this could be e.g. Dumme svin...
In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:
Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section "Background" .
Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!
Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort.
One of the main concerns when it comes to collecting data for the task of offensive language detection is to find high quality sources of user-generated content that represent each class in the annotation-schema to some extent. In our exploration phase we considered various social media platforms such as Twitter, Facebook, and Reddit.
We consider three social media sites as data.
Twitter. Twitter has been used extensively as a source of user-generated content and it was the first source considered in our initial data collection phase. The platform provides excellent interface for developers making it easy to gather substantial amounts of data with limited efforts. However, Twitter was not a suitable source of data for our task. This is due to the fact that Twitter has limited usage in Denmark, resulting in low quality data with many classes of interest unrepresented.
Facebook. We next considered Facebook, and the public page for the Danish media company Ekstra Bladet. We looked at user-generated comments on articles posted by Ekstra Bladet, and initial analysis of these comments showed great promise as they have a high degree of variation. The user behaviour on the page and the language used ranges from neutral language to very aggressive, where some users pour out sexist, racist and generally hateful language. We faced obstacles when collecting data from Facebook, due to the fact that Facebook recently made the decision to shut down all access to public pages through their developer interface. This makes computational data collection approaches impossible. We faced restrictions on scraping public pages with Facebook, and turned to manual collection of randomly selected user-generated comments from Ekstra Bladet's public page, yielding 800 comments of sufficient quality.
Reddit. Given that language classification tasks in general require substantial amounts of data, our exploration for suitable sources continued and our search next led us to Reddit. We scraped Reddit, collecting the top 500 posts from the Danish sub-reddits r/DANMAG and r/Denmark, as well as the user comments contained within each post.
We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark. In light of the General Data Protection Regulations in Europe (GDPR) and the increased concern for online privacy, we applied some necessary pre-processing steps on our dataset to ensure the privacy of the authors of the comments that were used. Personally identifying content (such as the names of individuals, not including celebrity names) was removed. This was handled by replacing each name of an individual (i.e. author or subject) with @USER, as presented in both BIBREF0 and BIBREF2 . All comments containing any sensitive information were removed. We classify sensitive information as any information that can be used to uniquely identify someone by the following characteristics; racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, and bio-metric data.
We base our annotation procedure on the guidelines and schemas presented in BIBREF0 , discussed in detail in section "Classification Structure" . As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared. This was used as an opportunity to refine the mutual understanding of the task at hand and to discuss the mismatches in these annotations for each sub-task.
We used a Jaccard index BIBREF18 to assess the similarity of our annotations. In sub-task A the Jaccard index of these initial 100 posts was 41.9%, 39.1% for sub-task B , and 42.8% for sub-task C. After some analysis of these results and the posts that we disagreed on it became obvious that to a large extent the disagreement was mainly caused by two reasons:
Guesswork of the context where the post itself was too vague to make a decisive decision on whether it was offensive or not without more context. An example of this is a post such as Skal de hjælpes hjem, næ nej de skal sendes hjem, where one might conclude, given the current political climate, that this is an offensive post targeted at immigrants. The context is, however, lacking so we cannot make a decisive decision. This post should, therefore, be labeled as non-offensive, since the post does not contain any profanity or a clearly stated group.
Failure to label posts containing some kind of profanity as offensive (typically when the posts themselves were not aggressive, harmful, or hateful). An example could be a post like @USER sgu da ikke hans skyld at hun ikke han finde ud af at koge fucking pasta, where the post itself is rather mild, but the presence of fucking makes this an offensive post according to our definitions.
In light of these findings our internal guidelines were refined so that no post should be labeled as offensive by interpreting any context that is not directly visible in the post itself and that any post containing any form of profanity should automatically be labeled as offensive. These stricter guidelines made the annotation procedure considerably easier while ensuring consistency. The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples.
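For concreteness, the Jaccard agreement measure used above can be computed as in the sketch below; representing each annotator's decisions as a set of (post id, label) pairs is an assumption about how the index was applied:

```python
def jaccard(a, b):
    """Jaccard index: size of intersection over size of union of two annotation sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

annotator_1 = {(1, "OFF"), (2, "NOT"), (3, "OFF")}
annotator_2 = {(1, "OFF"), (2, "OFF"), (3, "OFF")}
print(jaccard(annotator_1, annotator_2))  # 0.5
```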
Final Dataset
In Table 1 the distribution of samples by sources in our final dataset is presented. Although a useful tool, using the hate speech lexicon as a filter only resulted in 232 comments. The remaining comments from Reddit were then randomly sampled from the remaining corpus.
The fully annotated dataset was split into a train and test set, while maintaining the distribution of labels from the original dataset. The training set contains 80% of the samples, and the test set contains 20%. Table 2 presents the distribution of samples by label for both the train and test set. The dataset is skewed, with around 88% of the posts labeled as not offensive (NOT). This is, however, generally the case when it comes to user-generated content on online platforms, and any automatic detection system needs to be able to handle the problem of imbalanced data in order to be truly effective.
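A stratified 80/20 split that maintains the label distribution can be obtained with scikit-learn, as in the sketch below (the toy data and random seed are illustrative):

```python
from sklearn.model_selection import train_test_split

texts = ["harmless comment"] * 90 + ["offensive comment"] * 10   # toy stand-in data
labels = ["NOT"] * 90 + ["OFF"] * 10

train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels,
    test_size=0.20,
    stratify=labels,     # keep the NOT/OFF ratio identical in both splits
    random_state=42,
)
print(len(train_texts), len(test_texts))  # 80 20
```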
Features
One of the most important factors to consider when it comes to automatic classification tasks is the feature representation. This section discusses various representations used in the abusive language detection literature.
Top-level features. In BIBREF3 information comes from top-level features such as bag-of-words, uni-grams and more complex n-grams, and the literature certainly supports this. In their work on cyberbullying detection, BIBREF8 use word n-grams, character n-grams, and bag-of-words. They report uni-gram bag-of-word features as most predictive, followed by character tri-gram bag-of-words. Later work finds character n-grams are the most helpful features BIBREF15 , underlining the need for the modeling of un-normalized text. These simple top-level feature approaches are good but not without their limitations, since they often have high recall but lead to a high rate of false positives BIBREF2 . This is due to the fact that the presence of certain terms can easily lead to misclassification when using these types of features. Many words, however, do not clearly indicate which category the text sample belongs to, e.g. the word gay can be used in both neutral and offensive contexts.
Linguistic Features BIBREF15 use a number of linguistic features, including the length of samples, average word lengths, number of periods and question marks, number of capitalized letters, number of URLs, number of polite words, number of unknown words (by using an English dictionary), and number of insults and hate speech words. Although these features have not proven to provide much value on their own, they have been shown to be a good addition to the overall feature space BIBREF15 .
Word Representations. Top-level features often require the predictive words to occur in both the training set and the test sets, as discussed in BIBREF3 . For this reason, some sort of word generalization is required. BIBREF15 explore three types of embedding-derived features. First, they explore pre-trained embeddings derived from a large corpus of news samples. Secondly, they use word2vec BIBREF19 to generate word embeddings using their own corpus of text samples. We use both approaches. Both the pre-trained and word2vec models represent each word as a 200 dimensional distributed real number vector. Lastly, they develop 100 dimensional comment2vec model, based on the work of BIBREF20 . Their results show that the comment2vec and the word2vec models provide the most predictive features BIBREF15 . In BIBREF21 they experiment with pre-trained GloVe embeddings BIBREF22 , learned FastText embeddings BIBREF23 , and randomly initialized learned embeddings. Interestingly, the randomly initialized embeddings slightly outperform the others BIBREF21 .
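As a sketch of how such 200-dimensional word2vec embeddings can be trained on a corpus (gensim is used here purely for illustration; the cited works do not necessarily use this library, and the dimension parameter is named vector_size in gensim 4.x, size in older releases):

```python
from gensim.models import Word2Vec

corpus = [["this", "movie", "was", "great"],
          ["what", "a", "terrible", "film"]]        # toy tokenised comments

w2v = Word2Vec(sentences=corpus, vector_size=200, window=5, min_count=1, workers=4)
vector = w2v.wv["movie"]                            # 200-dimensional vector
print(vector.shape)                                 # (200,)
```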
Sentiment Scores. Sentiment scores are a common addition to the feature space of classification systems dealing with offensive and hateful speech. In our work we experiment with sentiment scores, and some of our models rely on them as a dimension in their feature space. To compute these sentiment score features our systems use two Python libraries: VADER BIBREF24 and AFINN BIBREF25 . Our models use the compound attribute, which gives a normalized sum of sentiment scores over all words in the sample. The compound attribute ranges from $-1$ (extremely negative) to $+1$ (extremely positive).
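A minimal sketch of extracting these two sentiment features with the VADER and AFINN Python libraries (the example text is illustrative; for Danish data an appropriate word list would be needed):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from afinn import Afinn

vader = SentimentIntensityAnalyzer()
afinn = Afinn()

text = "This is absolutely horrible"
features = {
    "vader_compound": vader.polarity_scores(text)["compound"],  # normalized, in [-1, 1]
    "afinn_score": afinn.score(text),                           # unnormalized sum of word valences
}
print(features)
```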
Reading Ease. As well as some of the top-level features mentioned so far, we also use Flesch-Kincaid Grade Level and Flesch Reading Ease scores. The Flesch-Kincaid Grade Level is a metric assessing the level of reading ability required to easily understand a sample of text.
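For reference, both scores are functions of words per sentence and syllables per word; the sketch below uses a naive vowel-group syllable counter, which is an approximation rather than the exact tool used in our pipeline:

```python
import re

def _syllables(word):
    # crude approximation: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = text.split()
    syllables = sum(_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

print(flesch_scores("This is a short and simple sentence."))
```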
Models
We introduce a variety of models in our work to compare different approaches to the task at hand. First of all, we introduce naive baselines that simply classify each sample as one of the categories of interest (based on BIBREF0 ). Next, we introduce a logistic regression model based on the work of BIBREF2 , using the same set of features as introduced there. Finally, we introduce three deep learning models: Learned-BiLSTM, Fast-BiLSTM, and AUX-Fast-BiLSTM. The logistic regression model is built using Scikit Learn BIBREF26 and the deep learning models are built using Keras BIBREF27 . The following sections describe these model architectures in detail, the algorithms they are based on, and the features they use.
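As an illustration of the BiLSTM family of models, the Keras sketch below resembles the Fast-BiLSTM setup with a frozen pre-trained embedding matrix; the layer sizes, the random stand-in embeddings, and the single sigmoid output (OFF vs. NOT for sub-task A) are assumptions, not the exact architecture:

```python
import numpy as np
from tensorflow.keras import layers, models, initializers

vocab_size, embed_dim, max_len = 20000, 300, 100
embedding_matrix = np.random.rand(vocab_size, embed_dim)  # stand-in for pre-trained vectors

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=initializers.Constant(embedding_matrix),
                     trainable=False),                    # frozen pre-trained embeddings
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),                # OFF vs. NOT
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```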
Results and Analysis
For each sub-task (A, B, and C, Section "Classification Structure" ) we present results for all methods in each language.
A - Offensive language identification:
English. For English (Table 3 ) Fast-BiLSTM performs best, trained for 100 epochs, using the OLID dataset. The model achieves a macro averaged F1-score of $0.735$ . This result is comparable to the BiLSTM based methods in OffensEval.
Additional training data from HSAOFL BIBREF2 does not consistently improve results. For the models using word embeddings results are worse with additional training data. On the other hand, for models that use a range of additional features (Logistic Regression and AUX-Fast-BiLSTM), the additional training data helps.
Danish. Results are in Table 4 . Logistic Regression works best with an F1-score of $0.699$ . This is the second best performing model for English, though the best performing model for English (Fast-BiLSTM) is worst for Danish.
Best results are given in Table 5 . The low scores for Danish compared to English may be explained by the small amount of data in the Danish dataset. The Danish training set contains $2,879$ samples (Table 2 ) while the English training set contains $13,240$ samples. Further, in the English dataset around $33\%$ of the samples are labeled offensive, while in the Danish set this rate is only around $12\%$ . The effect that this under-represented class has on the Danish classification task can be seen in more detail in Table 5 .
B - Categorization of offensive language type
English. In Table 6 the results are presented for sub-task B on English. The Learned-BiLSTM model trained for 60 epochs performs the best, obtaining a macro F1-score of $0.619$ .
Recall and precision scores are lower for UNT than TIN (Table 5 ). One reason is skew in the data, with only around $14\%$ of the posts labeled as UNT. The pre-trained embedding model, Fast-BiLSTM, performs the worst, with a macro averaged F1-score of $0.567$ . This indicates this approach is not good for detecting subtle differences in offensive samples in skewed data, while more complex feature models perform better.
Danish. Table 7 presents the results for sub-task B and the Danish language. The best performing system is the AUX-Fast-BiLSTM model (section UID26 ) trained for 100 epochs, which obtains an impressive macro F1-score of $0.729$ . This suggests that models that only rely on pre-trained word embeddings may not be optimal for this task. This should be considered alongside the indication in Section "Final Dataset" that relying on lexicon-based selection also performs poorly.
The limiting factor seems to be recall for the UNT category (Table 8 ). As mentioned in Section "Background" , the best performing system for sub-task B in OffensEval was a rule-based system, suggesting that more refined features, (e.g. lexica) may improve performance on this task. The better performance of models for Danish over English can most likely be explained by the fact that the training set used for Danish is more balanced, with around $42\%$ of the posts labeled as UNT.
C - Offensive language target identification
English. The results for sub-task C and the English language are presented in Table 9 . The best performing system is the Learned-BiLSTM model (section UID24 ) trained for 10 epochs, obtaining a macro averaged F1-score of $0.557$ . This is an improvement over the models introduced in BIBREF0 , where the BiLSTM based model achieves a macro F1-score of $0.470$ .
The main limitation of our model seems to be in the classification of OTH samples, as seen in Table 11 . This may be explained by the imbalance in the training data. It is interesting to see that this imbalance does not affect the GRP category as much, which only constitutes about $28\%$ of the training samples. One cause for this difference is the fact that the definition of the OTH category is vague, capturing all samples that do not belong to the previous two.
Danish. Table 10 presents the results for sub-task C and the Danish language. The best performing system is the same as in English, the Learned-BiLSTM model (section UID24 ), trained for 100 epochs, obtaining a macro averaged F1-score of $0.629$ . Given that this is the same model as the one that performed the best for English, this further indicates that task specific embeddings are helpful for more refined classification tasks.
It is interesting to see that both of the models using the additional set of features (Logistic Regression and AUX-Fast-BiLSTM) perform the worst. This indicates that these additional features are not beneficial for this more refined sub-task in Danish. The amount of samples used in training for this sub-task is very low. Imbalance does not have as much of an effect for Danish as it does for English, as can be seen in Table 11 . Only about $14\%$ of the samples are labeled as OTH in the data (Table 2 ), but the recall and precision scores are closer than they are for English.
Analysis
We perform analysis of the misclassified samples in the evaluation of our best performing models. To accomplish this, we compute the TF-IDF scores for a range of n-grams. We then take the top scoring n-grams in each category and try to discover any patterns that might exist. We also perform some manual analysis of these misclassified samples. The goal of this process is to try to get a clear idea of the areas our classifiers are lacking in. The following sections describe this process for each of the sub-tasks.
A - Offensive language identification
The classifier struggles to identify obfuscated offensive terms. This includes words that are concatenated together, such as barrrysoetorobullshit. The classifier also seems to associate she with offensiveness: several samples containing she are misclassified as offensive, while he is less often associated with offensive language.
There are several examples where our classifier labels profanity-bearing content as offensive that are labeled as non-offensive in the test set. Posts such as Are you fucking serious? and Fuck I cried in this scene are labeled non-offensive in the test set, but according to annotation guidelines should be classified as offensive.
The best classifier is inclined to classify longer sequences as offensive. The mean character length of misclassified offensive samples is $204.7$ , while the mean character length of the samples misclassified not offensive is $107.9$ . This may be due to any post containing any form of profanity being offensive in sub-task A, so more words increase the likelihood of $>0$ profane words.
The classifier suffers from the same limitations as the classifier for English when it comes to obfuscated words, misclassifying samples such as Hahhaaha lær det biiiiiaaaatch as non-offensive. It also seems to associate the occurrence of the word svensken with offensive language, and quite a few samples containing that word are misclassified as offensive. This can be explained by the fact that offensive language towards Swedes is common in the training data, resulting in this association. From this, we can conclude that the classifier relies too much on the presence of individual keywords, ignoring the context of these keywords.
B - Categorization of offensive language type
Obfuscation prevails in sub-task B. Our classifier misses indicators of targeted insults such as WalkAwayFromAllDemocrats. It seems to rely too highly on the presence of profanity, misclassifying samples containing terms such as bitch, fuck, shit, etc. as targeted insults.
The issue of the data quality is also concerning in this sub-task, as we discover samples containing clear targeted insults such as HillaryForPrison being labeled as untargeted in the test set.
Our Danish classifier also seems to miss obfuscated words such as kidsarefuckingstupid in the classification of targeted insults. It relies somewhat too heavily on the presence of profanity such as pikfjæs, lorte and fucking, and misclassifies untargeted posts containing these keywords as targeted insults.
C - Offensive language target identification
Misclassification based on obfuscated terms, as discussed earlier, also seems to be an issue for sub-task C. This problem of obfuscated terms could be tackled by introducing character-level features such as character-level n-grams.
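A sketch of such character-level n-gram features using scikit-learn (the analyzer choice and n-gram range are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), min_df=1)
X = vectorizer.fit_transform(["kidsarefuckingstupid", "a perfectly polite comment"])
print(X.shape)  # (2, number of character n-gram features)
```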
Conclusion
Offensive language on online social media platforms is harmful. Due to the vast amount of user-generated content on online platforms, automatic methods are required to detect this kind of harmful content. Until now, most of the research on the topic has focused on solving the problem for English. We explored English and Danish hate speech detection and categorization, finding that sharing information across languages and platforms leads to good models for the task.
The resources and classifiers are available from the authors under CC-BY license, pending use in a shared task; a data statement BIBREF29 is included in the appendix. Extended results and analysis are given in BIBREF30 . | not researched as much as English |
32e8eda2183bcafbd79b22f757f8f55895a0b7b2 | 32e8eda2183bcafbd79b22f757f8f55895a0b7b2_0 | Q: How many categories of offensive language were there?
Text: Introduction
Offensive language in user-generated content on online platforms, and its implications, have been gaining attention over the last couple of years. This interest is sparked by the fact that many of the online social media platforms have come under scrutiny over how this type of content should be detected and dealt with. It is, however, far from trivial to deal with this type of language directly, due to the gigantic amount of user-generated content created every day. For this reason, automatic methods are required, using natural language processing (NLP) and machine learning techniques.
Given that research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. To accomplish this, an appropriate dataset must be constructed and annotated with the guidelines described in BIBREF0 . We furthermore set out to analyze the linguistic patterns that prove hard to detect.
Background
Offensive language varies greatly, ranging from simple profanity to much more severe types of language. One of the more troublesome types of language is hate speech and the presence of hate speech on social media platforms has been shown to be in correlation with hate crimes in real life settings BIBREF1 . It can be quite hard to distinguish between generally offensive language and hate speech as few universal definitions exist BIBREF2 . There does, however, seem to be a general consensus that hate speech can be defined as language that targets a group with the intent to be harmful or to cause social chaos. This targeting is usually done on the basis of some characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . In section "Background" , hate speech is defined in more detail. Offensive language, on the other hand, is a more general category containing any type of profanity or insult. Hate speech can, therefore, be classified as a subset of offensive language. BIBREF0 propose guidelines for classifying offensive language as well as the type and the target of offensive language. These guidelines capture the characteristics of generally offensive language, hate speech and other types of targeted offensive language such as cyberbullying. However, despite offensive language detection being a burgeoning field, no dataset yet exists for Danish BIBREF4 despite this phenomenon being present BIBREF5 .
Many different sub-tasks have been considered in the literature on offensive and harmful language detection, ranging from the detection of general offensive language to more refined tasks such as hate speech detection BIBREF2 , and cyberbullying detection BIBREF6 .
A key aspect in the research of automatic classification methods for language of any kind is having substantial amount of high quality data that reflects the goal of the task at hand, and that also contains a decent amount of samples belonging to each of the classes being considered. To approach this problem as a supervised classification task the data needs to be annotated according to a well-defined annotation schema that clearly reflects the problem statement. The quality of the data is of vital importance, since low quality data is unlikely to provide meaningful results. Cyberbullying is commonly defined as targeted insults or threats against an individual BIBREF0 . Three factors are mentioned as indicators of cyberbullying BIBREF6 : intent to cause harm, repetitiveness, and an imbalance of power. This type of online harassment most commonly occurs between children and teenagers, and cyberbullying acts are prohibited by law in several countries, as well as many of the US states BIBREF7 .
BIBREF8 focus on classifying cyberbullying events in Dutch. They define cyberbullying as textual content that is published online by an individual and is aggressive or hurtful against a victim. The annotation-schema used consists of two steps. In the first step, a three-point harmfulness score is assigned to each post as well as a category denoting the author's role (i.e. harasser, victim, or bystander). In the second step a more refined categorization is applied, by annotating the posts using the following labels: Threat/Blackmail, Insult, Curse/Exclusion, Defamation, Sexual Talk, Defense, and Encouragement to the harasser. Hate Speech. As discussed in Section "Classification Structure" , hate speech is generally defined as language that is targeted towards a group, with the intent to be harmful or cause social chaos. This targeting is usually based on characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . Hate speech is prohibited by law in many countries, although the definitions may vary. In article 20 of the International Covenant on Civil and Political Rights (ICCPR) it is stated that "Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law" BIBREF9 . In Denmark, hate speech is prohibited by law, and is formally defined as public statements where a group is threatened, insulted, or degraded on the basis of characteristics such as nationality, ethnicity, religion, or sexual orientation BIBREF10 . Hate speech is generally prohibited by law in the European Union, where it is defined as public incitement to violence or hatred directed against a group defined on the basis of characteristics such as race, religion, and national or ethnic origin BIBREF11 . Hate speech is, however, not prohibited by law in the United States. This is due to the fact that hate speech is protected by the freedom of speech act in the First Amendment of the U.S. Constitution BIBREF12 .
The focus of BIBREF2 is on classifying hate speech by distinguishing between general offensive language and hate speech. They define hate speech as "language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group". They argue that the high use of profanity on social media makes it vitally important to be able to effectively distinguish between generally offensive language and the more severe hate speech. The dataset is constructed by gathering data from Twitter, using a hate speech lexicon to query the data, with crowdsourced annotations.
Contradicting definitions. It becomes clear that one of the key challenges in doing meaningful research on the topic are the differences in both the annotation-schemas and the definitions used, since it makes it difficult to effectively compare results to existing work, as pointed out by several authors ( BIBREF13 , BIBREF3 , BIBREF14 , BIBREF0 ). These issues become clear when comparing the work of BIBREF6 , where racist and sexist remarks are classified as a subset of insults, to the work of BIBREF15 , where similar remarks are split into two categories; hate speech and derogatory language. Another clear example of conflicting definitions becomes visible when comparing BIBREF16 , where hate speech is considered without any consideration of overlaps with the more general type of offensive language, to BIBREF2 where a clear distinction is made between the two, by classifying posts as either Hate speech, Offensive or Neither. This lack of consensus led BIBREF14 to propose annotation guidelines and introduce a typology. BIBREF17 argue that these proposed guidelines do not effectively capture both the type and target of the offensive language.
Dataset
In this section we give a comprehensive overview of the structure of the task and describe the dataset provided in BIBREF0 . Our work adopts this framing of the offensive language phenomenon.
Classification Structure
Offensive content is broken into three sub-tasks to be able to effectively identify both the type and the target of the offensive posts. These three sub-tasks are chosen with the objective of being able to capture different types of offensive language, such as hate speech and cyberbullying (section "Background" ).
In sub-task A the goal is to classify posts as either offensive or not. Offensive posts include insults and threats as well as any form of untargeted profanity BIBREF17 . Each sample is annotated with one of the following labels:
Not Offensive (NOT). In English this could be a post such as #TheNunMovie was just as scary as I thought it would be. Clearly the critics don't think she is terrifyingly creepy. I like how it ties in with #TheConjuring series. In Danish this could be a post such as Kim Larsen var god, men hans død blev alt for hyped.
Offensive (OFF). In English this could be a post such as USER is a #pervert himself!. In Danish this could be a post such as Kalle er faggot...
In sub-task B the goal is to classify the type of offensive language by determining if the offensive language is targeted or not. Targeted offensive language contains insults and threats to an individual, group, or others BIBREF17 . Untargeted posts contain general profanity while not clearly targeting anyone BIBREF17 . Only posts labeled as offensive (OFF) in sub-task A are considered in this task. Each sample is annotated with one of the following labels:
Targeted Insult (TIN). In English this could be a post such as @USER Please ban this cheating scum. In Danish this could be e.g. Hun skal da selv have 99 år, den smatso.
Untargeted (UNT). In English this could be a post such as 2 weeks of resp done and I still don't know shit my ass still on vacation mode. In Danish this could be e.g. Dumme svin...
In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:
Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section "Background" .
Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!
Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort.
One of the main concerns when it comes to collecting data for the task of offensive language detection is to find high quality sources of user-generated content that represent each class in the annotation-schema to some extent. In our exploration phase we considered various social media platforms such as Twitter, Facebook, and Reddit.
We consider three social media sites as data.
Twitter. Twitter has been used extensively as a source of user-generated content and it was the first source considered in our initial data collection phase. The platform provides excellent interface for developers making it easy to gather substantial amounts of data with limited efforts. However, Twitter was not a suitable source of data for our task. This is due to the fact that Twitter has limited usage in Denmark, resulting in low quality data with many classes of interest unrepresented.
Facebook. We next considered Facebook, and the public page for the Danish media company Ekstra Bladet. We looked at user-generated comments on articles posted by Ekstra Bladet, and initial analysis of these comments showed great promise as they have a high degree of variation. The user behaviour on the page and the language used ranges from neutral language to very aggressive, where some users pour out sexist, racist and generally hateful language. We faced obstacles when collecting data from Facebook, due to the fact that Facebook recently made the decision to shut down all access to public pages through their developer interface. This makes computational data collection approaches impossible. We faced restrictions on scraping public pages with Facebook, and turned to manual collection of randomly selected user-generated comments from Ekstra Bladet's public page, yielding 800 comments of sufficient quality.
Reddit. Given that language classification tasks in general require substantial amounts of data, our exploration for suitable sources continued and our search next led us to Reddit. We scraped Reddit, collecting the top 500 posts from the Danish sub-reddits r/DANMAG and r/Denmark, as well as the user comments contained within each post.
We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark. In light of the General Data Protection Regulations in Europe (GDPR) and the increased concern for online privacy, we applied some necessary pre-processing steps on our dataset to ensure the privacy of the authors of the comments that were used. Personally identifying content (such as the names of individuals, not including celebrity names) was removed. This was handled by replacing each name of an individual (i.e. author or subject) with @USER, as presented in both BIBREF0 and BIBREF2 . All comments containing any sensitive information were removed. We classify sensitive information as any information that can be used to uniquely identify someone by the following characteristics; racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, and bio-metric data.
We base our annotation procedure on the guidelines and schemas presented in BIBREF0 , discussed in detail in section "Classification Structure" . As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared. This was used as an opportunity to refine the mutual understanding of the task at hand and to discuss the mismatches in these annotations for each sub-task.
We used a Jaccard index BIBREF18 to assess the similarity of our annotations. In sub-task A the Jaccard index of these initial 100 posts was 41.9%, 39.1% for sub-task B , and 42.8% for sub-task C. After some analysis of these results and the posts that we disagreed on it became obvious that to a large extent the disagreement was mainly caused by two reasons:
Guesswork of the context where the post itself was too vague to make a decisive decision on whether it was offensive or not without more context. An example of this is a post such as Skal de hjælpes hjem, næ nej de skal sendes hjem, where one might conclude, given the current political climate, that this is an offensive post targeted at immigrants. The context is, however, lacking so we cannot make a decisive decision. This post should, therefore, be labeled as non-offensive, since the post does not contain any profanity or a clearly stated group.
Failure to label posts containing some kind of profanity as offensive (typically when the posts themselves were not aggressive, harmful, or hateful). An example could be a post like @USER sgu da ikke hans skyld at hun ikke han finde ud af at koge fucking pasta, where the post itself is rather mild, but the presence of fucking makes this an offensive post according to our definitions.
In light of these findings our internal guidelines were refined so that no post should be labeled as offensive by interpreting any context that is not directly visible in the post itself and that any post containing any form of profanity should automatically be labeled as offensive. These stricter guidelines made the annotation procedure considerably easier while ensuring consistency. The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples.
Final Dataset
In Table 1 the distribution of samples by sources in our final dataset is presented. Although a useful tool, using the hate speech lexicon as a filter only resulted in 232 comments. The remaining comments from Reddit were then randomly sampled from the remaining corpus.
The fully annotated dataset was split into a train and test set, while maintaining the distribution of labels from the original dataset. The training set contains 80% of the samples, and the test set contains 20%. Table 2 presents the distribution of samples by label for both the train and test set. The dataset is skewed, with around 88% of the posts labeled as not offensive (NOT). This is, however, generally the case when it comes to user-generated content on online platforms, and any automatic detection system needs to be able to handle the problem of imbalanced data in order to be truly effective.
Features
One of the most important factors to consider when it comes to automatic classification tasks is the feature representation. This section discusses various representations used in the abusive language detection literature.
Top-level features. In BIBREF3 information comes from top-level features such as bag-of-words, uni-grams and more complex n-grams, and the literature certainly supports this. In their work on cyberbullying detection, BIBREF8 use word n-grams, character n-grams, and bag-of-words. They report uni-gram bag-of-word features as most predictive, followed by character tri-gram bag-of-words. Later work finds character n-grams are the most helpful features BIBREF15 , underlining the need for the modeling of un-normalized text. These simple top-level feature approaches are good but not without their limitations, since they often have high recall but lead to a high rate of false positives BIBREF2 . This is due to the fact that the presence of certain terms can easily lead to misclassification when using these types of features. Many words, however, do not clearly indicate which category the text sample belongs to, e.g. the word gay can be used in both neutral and offensive contexts.
Linguistic Features BIBREF15 use a number of linguistic features, including the length of samples, average word lengths, number of periods and question marks, number of capitalized letters, number of URLs, number of polite words, number of unknown words (by using an English dictionary), and number of insults and hate speech words. Although these features have not proven to provide much value on their own, they have been shown to be a good addition to the overall feature space BIBREF15 .
Word Representations. Top-level features often require the predictive words to occur in both the training set and the test sets, as discussed in BIBREF3 . For this reason, some sort of word generalization is required. BIBREF15 explore three types of embedding-derived features. First, they explore pre-trained embeddings derived from a large corpus of news samples. Secondly, they use word2vec BIBREF19 to generate word embeddings using their own corpus of text samples. We use both approaches. Both the pre-trained and word2vec models represent each word as a 200 dimensional distributed real number vector. Lastly, they develop 100 dimensional comment2vec model, based on the work of BIBREF20 . Their results show that the comment2vec and the word2vec models provide the most predictive features BIBREF15 . In BIBREF21 they experiment with pre-trained GloVe embeddings BIBREF22 , learned FastText embeddings BIBREF23 , and randomly initialized learned embeddings. Interestingly, the randomly initialized embeddings slightly outperform the others BIBREF21 .
Sentiment Scores. Sentiment scores are a common addition to the feature space of classification systems dealing with offensive and hateful speech. In our work we experiment with sentiment scores, and some of our models rely on them as a dimension in their feature space. To compute these sentiment score features our systems use two Python libraries: VADER BIBREF24 and AFINN BIBREF25 . Our models use the compound attribute, which gives a normalized sum of sentiment scores over all words in the sample. The compound attribute ranges from $-1$ (extremely negative) to $+1$ (extremely positive).
Reading Ease. As well as some of the top-level features mentioned so far, we also use Flesch-Kincaid Grade Level and Flesch Reading Ease scores. The Flesch-Kincaid Grade Level is a metric assessing the level of reading ability required to easily understand a sample of text.
Models
We introduce a variety of models in our work to compare different approaches to the task at hand. First of all, we introduce naive baselines that simply classify each sample as one of the categories of interest (based on BIBREF0 ). Next, we introduce a logistic regression model based on the work of BIBREF2 , using the same set of features as introduced there. Finally, we introduce three deep learning models: Learned-BiLSTM, Fast-BiLSTM, and AUX-Fast-BiLSTM. The logistic regression model is built using Scikit Learn BIBREF26 and the deep learning models are built using Keras BIBREF27 . The following sections describe these model architectures in detail, the algorithms they are based on, and the features they use.
Results and Analysis
For each sub-task (A, B, and C, Section "Classification Structure" ) we present results for all methods in each language.
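Throughout, results are reported as macro-averaged F1. A brief sketch of how such scores can be computed with scikit-learn is given below; the label values and predictions are purely illustrative.

```python
# Minimal sketch: macro-averaged F1, the evaluation measure reported below.
from sklearn.metrics import classification_report, f1_score

y_true = ["NOT", "NOT", "OFF", "OFF", "NOT"]
y_pred = ["NOT", "OFF", "OFF", "NOT", "NOT"]

print(f1_score(y_true, y_pred, average="macro"))
print(classification_report(y_true, y_pred))
```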
A - Offensive language identification
English. For English (Table 3 ) Fast-BiLSTM performs best, trained for 100 epochs, using the OLID dataset. The model achieves a macro-averaged F1-score of $0.735$ . This result is comparable to the BiLSTM-based methods in OffensEval.
Additional training data from HSAOFL BIBREF2 does not consistently improve results. For the models using word embeddings, results are worse with additional training data. On the other hand, for models that use a range of additional features (Logistic Regression and AUX-Fast-BiLSTM), the additional training data helps.
Danish. Results are in Table 4 . Logistic Regression works best with an F1-score of $0.699$ . This is the second-best performing model for English, though the best performing model for English (Fast-BiLSTM) is the worst for Danish.
Best results are given in Table 5 . The low scores for Danish compared to English may be explained by the small amount of data in the Danish dataset. The Danish training set contains $2,879$ samples (Table 2 ) while the English training set contains $13,240$ samples. Further, in the English dataset around $33\%$ of the samples are labeled offensive, while in the Danish set this rate is only around $12\%$ . The effect that this underrepresented class has on the Danish classification task can be seen in more detail in Table 5 .
B - Categorization of offensive language type
English. In Table 6 the results are presented for sub-task B on English. The Learned-BiLSTM model trained for 60 epochs performs the best, obtaining a macro F1-score of $0.619$ .
Recall and precision scores are lower for UNT than TIN (Table 5 ). One reason is skew in the data, with only around $14\%$ of the posts labeled as UNT. The pre-trained embedding model, Fast-BiLSTM, performs the worst, with a macro-averaged F1-score of $0.567$ . This indicates that this approach is not well suited for detecting subtle differences between offensive samples in skewed data, while models with more complex feature sets perform better.
Danish. Table 7 presents the results for sub-task B and the Danish language. The best performing system is the AUX-Fast-BiLSTM model (Section "Models" ) trained for 100 epochs, which obtains an impressive macro F1-score of $0.729$ . This suggests that models relying only on pre-trained word embeddings may not be optimal for this task. This should be considered alongside the indication in Section "Final Dataset" that relying on lexicon-based selection also performs poorly.
The limiting factor seems to be recall for the UNT category (Table 8 ). As mentioned in Section "Background" , the best performing system for sub-task B in OffensEval was a rule-based system, suggesting that more refined features (e.g. lexica) may improve performance on this task. The better performance of models for Danish over English can most likely be explained by the fact that the training set used for Danish is more balanced, with around $42\%$ of the posts labeled as UNT.
C - Offensive language target identification
English. The results for sub-task C and the English language are presented in Table 9 . The best performing system is the Learned-BiLSTM model (Section "Models" ) trained for 10 epochs, obtaining a macro-averaged F1-score of $0.557$ . This is an improvement over the models introduced in BIBREF0 , where the BiLSTM-based model achieves a macro F1-score of $0.470$ .
The main limitation of our model seems to be in the classification of OTH samples, as seen in Table 11 . This may be explained by the imbalance in the training data. It is interesting to see that this imbalance does not affect the GRP category as much, which only constitutes about $28\%$ of the training samples. One cause for this difference is the fact that the definition of the OTH category is vague, capturing all samples that do not belong to the other two categories.
Danish. Table 10 presents the results for sub-task C and the Danish language. The best performing system is the same as for English, the Learned-BiLSTM model (Section "Models" ), trained for 100 epochs, obtaining a macro-averaged F1-score of $0.629$ . The fact that the same model performs best for both languages further indicates that task-specific embeddings are helpful for more refined classification tasks.
It is interesting to see that both of the models using the additional set of features (Logistic Regression and AUX-Fast-BiLSTM) perform the worst. This indicates that these additional features are not beneficial for this more refined sub-task in Danish. The number of samples used in training for this sub-task is very low. Imbalance does not have as much of an effect for Danish as it does for English, as can be seen in Table 11 . Only about $14\%$ of the samples are labeled as OTH in the data (Table 2 ), but the recall and precision scores are closer than they are for English.
Analysis
We analyze the misclassified samples from the evaluation of our best performing models. To accomplish this, we compute TF-IDF scores for a range of n-grams, take the top-scoring n-grams in each category, and try to discover any patterns that might exist. We also perform some manual analysis of these misclassified samples. The goal of this process is to get a clear idea of the areas in which our classifiers are lacking. The following sections describe this process for each of the sub-tasks.
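The sketch below illustrates this ranking step: misclassified samples are vectorized with TF-IDF over word n-grams and the highest-scoring n-grams are printed. The n-gram range, the number of terms shown, and the example samples are illustrative assumptions.

```python
# Minimal sketch: rank n-grams in misclassified samples by summed TF-IDF score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

misclassified = [
    "Are you fucking serious?",
    "Fuck I cried in this scene",
]

vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(misclassified)

scores = np.asarray(tfidf.sum(axis=0)).ravel()
terms = np.array(vectorizer.get_feature_names_out())
top_terms = terms[np.argsort(scores)[::-1][:10]]
print(top_terms)
```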
A - Offensive language identification
English. The classifier struggles to identify obfuscated offensive terms, including words that are concatenated together, such as barrrysoetorobullshit. The classifier also seems to associate she with offensiveness: several samples containing she are misclassified as offensive, while he is less often associated with offensive language.
There are also several examples where our classifier labels profanity-bearing content as offensive even though it is labeled as non-offensive in the test set. Posts such as Are you fucking serious? and Fuck I cried in this scene are labeled non-offensive in the test set, but according to the annotation guidelines they should be classified as offensive.
The best classifier is inclined to classify longer sequences as offensive. The mean character length of samples misclassified as offensive is $204.7$ , while the mean character length of samples misclassified as not offensive is $107.9$ . This may be because any post containing any form of profanity counts as offensive in sub-task A, so more words increase the likelihood of at least one profane word occurring.
Danish. The classifier suffers from the same limitations as the classifier for English when it comes to obfuscated words, misclassifying samples such as Hahhaaha lær det biiiiiaaaatch as non-offensive. It also seems to associate the occurrence of the word svensken with offensive language, and quite a few samples containing that word are misclassified as offensive. This can be explained by the fact that offensive language towards Swedes is common in the training data, resulting in this association. From this, we conclude that the classifier relies too much on the presence of individual keywords, ignoring their context.
B - Categorization of offensive language type
English. Obfuscation prevails in sub-task B. Our classifier misses indicators of targeted insults such as WalkAwayFromAllDemocrats. It also seems to rely too heavily on the presence of profanity, misclassifying samples containing terms such as bitch, fuck, shit, etc. as targeted insults.
Data quality is also a concern in this sub-task, as we discover samples containing clear targeted insults, such as HillaryForPrison, that are labeled as untargeted in the test set.
Danish. Our classifier also seems to miss obfuscated words such as kidsarefuckingstupid in the classification of targeted insults. It relies somewhat too heavily on the presence of profanity such as pikfjæs, lorte and fucking, and misclassifies untargeted posts containing these keywords as targeted insults.
C - Offensive language target identification
Misclassification based on obfuscated terms, as discussed earlier, also seems to be an issue for sub-task C. This problem of obfuscated terms could be tackled by introducing character-level features such as character-level n-grams.
Conclusion
Offensive language on online social media platforms is harmful. Due to the vast amount of user-generated content on online platforms, automatic methods are required to detect this kind of harmful content. Until now, most of the research on the topic has focused on solving the problem for English. We explored English and Danish hate speech detection and categorization, finding that sharing information across languages and platforms leads to good models for the task.
The resources and classifiers are available from the authors under CC-BY license, pending use in a shared task; a data statement BIBREF29 is included in the appendix. Extended results and analysis are given in BIBREF30 .
Offensive language in user-generated content on online platforms and its implications has been gaining attention over the last couple of years. This interest is sparked by the fact that many of the online social media platforms have come under scrutiny on how this type of content should be detected and dealt with. It is, however, far from trivial to deal with this type of language directly due to the gigantic amount of user-generated content created every day. For this reason, automatic methods are required, using natural language processing (NLP) and machine learning techniques.
Given the fact that the research on offensive language detection has to a large extent been focused on the English language, we set out to explore the design of models that can successfully be used for both English and Danish. To accomplish this, an appropriate dataset must be constructed, annotated with the guidelines described in BIBREF0 . We, furthermore, set out to analyze the linguistic features that prove hard to detect by analyzing the patterns that prove hard to detect.
Background
Offensive language varies greatly, ranging from simple profanity to much more severe types of language. One of the more troublesome types of language is hate speech and the presence of hate speech on social media platforms has been shown to be in correlation with hate crimes in real life settings BIBREF1 . It can be quite hard to distinguish between generally offensive language and hate speech as few universal definitions exist BIBREF2 . There does, however, seem to be a general consensus that hate speech can be defined as language that targets a group with the intent to be harmful or to cause social chaos. This targeting is usually done on the basis of some characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . In section "Background" , hate speech is defined in more detail. Offensive language, on the other hand, is a more general category containing any type of profanity or insult. Hate speech can, therefore, be classified as a subset of offensive language. BIBREF0 propose guidelines for classifying offensive language as well as the type and the target of offensive language. These guidelines capture the characteristics of generally offensive language, hate speech and other types of targeted offensive language such as cyberbullying. However, despite offensive language detection being a burgeoning field, no dataset yet exists for Danish BIBREF4 despite this phenomenon being present BIBREF5 .
Many different sub-tasks have been considered in the literature on offensive and harmful language detection, ranging from the detection of general offensive language to more refined tasks such as hate speech detection BIBREF2 , and cyberbullying detection BIBREF6 .
A key aspect in the research of automatic classification methods for language of any kind is having substantial amount of high quality data that reflects the goal of the task at hand, and that also contains a decent amount of samples belonging to each of the classes being considered. To approach this problem as a supervised classification task the data needs to be annotated according to a well-defined annotation schema that clearly reflects the problem statement. The quality of the data is of vital importance, since low quality data is unlikely to provide meaningful results. Cyberbullying is commonly defined as targeted insults or threats against an individual BIBREF0 . Three factors are mentioned as indicators of cyberbullying BIBREF6 : intent to cause harm, repetitiveness, and an imbalance of power. This type of online harassment most commonly occurs between children and teenagers, and cyberbullying acts are prohibited by law in several countries, as well as many of the US states BIBREF7 .
BIBREF8 focus on classifying cyberbullying events in Dutch. They define cyberbullying as textual content that is published online by an individual and is aggressive or hurtful against a victim. The annotation-schema used consists of two steps. In the first step, a three-point harmfulness score is assigned to each post as well as a category denoting the authors role (i.e. harasser, victim, or bystander). In the second step a more refined categorization is applied, by annotating the posts using the the following labels: Threat/Blackmail, Insult, Curse/Exclusion, Defamation, Sexual Talk, Defense, and Encouragement to the harasser. Hate Speech. As discussed in Section "Classification Structure" , hate speech is generally defined as language that is targeted towards a group, with the intend to be harmful or cause social chaos. This targeting is usually based on characteristics such as race, color, ethnicity, gender, sexual orientation, nationality or religion BIBREF3 . Hate speech is prohibited by law in many countries, although the definitions may vary. In article 20 of the International Covenant on Civil and Political Rights (ICCPR) it is stated that "Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law" BIBREF9 . In Denmark, hate speech is prohibited by law, and is formally defined as public statements where a group is threatened, insulted, or degraded on the basis of characteristics such as nationality, ethnicity, religion, or sexual orientation BIBREF10 . Hate speech is generally prohibited by law in the European Union, where it is defined as public incitement to violence or hatred directed against a group defined on the basis of characteristics such as race, religion, and national or ethnic origin BIBREF11 . Hate speech is, however, not prohibited by law in the United States. This is due to the fact that hate speech is protected by the freedom of speech act in the First Amendment of the U.S. Constitution BIBREF12 .
BIBREF2 focus is on classifying hate speech by distinguishing between general offensive language and hate speech. They define hate speech as "language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group". They argue that the high use of profanity on social media makes it vitally important to be able to effectively distinguish between generally offensive language and the more severe hate speech. The dataset is constructed by gathering data from Twitter, using a hate speech lexicon to query the data with crowdsourced annotations.
Contradicting definitions. It becomes clear that one of the key challenges in doing meaningful research on the topic are the differences in both the annotation-schemas and the definitions used, since it makes it difficult to effectively compare results to existing work, as pointed out by several authors ( BIBREF13 , BIBREF3 , BIBREF14 , BIBREF0 ). These issues become clear when comparing the work of BIBREF6 , where racist and sexist remarks are classified as a subset of insults, to the work of BIBREF15 , where similar remarks are split into two categories; hate speech and derogatory language. Another clear example of conflicting definitions becomes visible when comparing BIBREF16 , where hate speech is considered without any consideration of overlaps with the more general type of offensive language, to BIBREF2 where a clear distinction is made between the two, by classifying posts as either Hate speech, Offensive or Neither. This lack of consensus led BIBREF14 to propose annotation guidelines and introduce a typology. BIBREF17 argue that these proposed guidelines do not effectively capture both the type and target of the offensive language.
Dataset
In this section we give a comprehensive overview of the structure of the task and describe the dataset provided in BIBREF0 . Our work adopts this framing of the offensive language phenomenon.
Classification Structure
Offensive content is broken into three sub-tasks to be able to effectively identify both the type and the target of the offensive posts. These three sub-tasks are chosen with the objective of being able to capture different types of offensive language, such as hate speech and cyberbullying (section "Background" ).
In sub-task A the goal is to classify posts as either offensive or not. Offensive posts include insults and threats as well as any form of untargeted profanity BIBREF17 . Each sample is annotated with one of the following labels:
In English this could be a post such as #TheNunMovie was just as scary as I thought it would be. Clearly the critics don't think she is terrifyingly creepy. I like how it ties in with #TheConjuring series. In Danish this could be a post such as Kim Larsen var god, men hans død blev alt for hyped.
. In English this could be a post such as USER is a #pervert himself!. In Danish this could be a post such as Kalle er faggot...
In sub-task B the goal is to classify the type of offensive language by determining if the offensive language is targeted or not. Targeted offensive language contains insults and threats to an individual, group, or others BIBREF17 . Untargeted posts contain general profanity while not clearly targeting anyone BIBREF17 . Only posts labeled as offensive (OFF) in sub-task A are considered in this task. Each sample is annotated with one of the following labels:
Targeted Insult (TIN). In English this could be a post such as @USER Please ban this cheating scum. In Danish this could be e.g. Hun skal da selv have 99 år, den smatso.
Untargeted (UNT). In English this could be a post such as 2 weeks of resp done and I still don't know shit my ass still on vacation mode. In Danish this could e.g. Dumme svin...
In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17 . Samples are annotated with one of the following:
Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section "Background" .
Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!
Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories. BIBREF17 . In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort.
One of the main concerns when it comes to collecting data for the task of offensive language detection is to find high quality sources of user-generated content that represent each class in the annotation-schema to some extent. In our exploration phase we considered various social media platforms such as Twitter, Facebook, and Reddit.
We consider three social media sites as data.
Twitter. Twitter has been used extensively as a source of user-generated content and it was the first source considered in our initial data collection phase. The platform provides excellent interface for developers making it easy to gather substantial amounts of data with limited efforts. However, Twitter was not a suitable source of data for our task. This is due to the fact that Twitter has limited usage in Denmark, resulting in low quality data with many classes of interest unrepresented.
Facebook. We next considered Facebook, and the public page for the Danish media company Ekstra Bladet. We looked at user-generated comments on articles posted by Ekstra Bladet, and initial analysis of these comments showed great promise as they have a high degree of variation. The user behaviour on the page and the language used ranges from neutral language to very aggressive, where some users pour out sexist, racist and generally hateful language. We faced obstacles when collecting data from Facebook, due to the fact that Facebook recently made the decision to shut down all access to public pages through their developer interface. This makes computational data collection approaches impossible. We faced restrictions on scraping public pages with Facebook, and turned to manual collection of randomly selected user-generated comments from Ekstra Bladet's public page, yielding 800 comments of sufficient quality.
Reddit. Given that language classification tasks in general require substantial amounts of data, our exploration for suitable sources continued and our search next led us to Reddit. We scraped Reddit, collecting the top 500 posts from the Danish sub-reddits r/DANMAG and r/Denmark, as well as the user comments contained within each post.
We published a survey on Reddit asking Danish speaking users to suggest offensive, sexist, and racist terms for a lexicon. Language and user behaviour varies between platforms, so the goal is to capture platform-specific terms. This gave 113 offensive and hateful terms which were used to find offensive comments. The remainder of comments in the corpus were shuffled and a subset of this corpus was then used to fill the remainder of the final dataset. The resulting dataset contains 3600 user-generated comments, 800 from Ekstra Bladet on Facebook, 1400 from r/DANMAG and 1400 from r/Denmark. In light of the General Data Protection Regulations in Europe (GDPR) and the increased concern for online privacy, we applied some necessary pre-processing steps on our dataset to ensure the privacy of the authors of the comments that were used. Personally identifying content (such as the names of individuals, not including celebrity names) was removed. This was handled by replacing each name of an individual (i.e. author or subject) with @USER, as presented in both BIBREF0 and BIBREF2 . All comments containing any sensitive information were removed. We classify sensitive information as any information that can be used to uniquely identify someone by the following characteristics; racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, and bio-metric data.
We base our annotation procedure on the guidelines and schemas presented in BIBREF0 , discussed in detail in section "Classification Structure" . As a warm-up procedure, the first 100 posts were annotated by two annotators (the author and the supervisor) and the results compared. This was used as an opportunity to refine the mutual understanding of the task at hand and to discuss the mismatches in these annotations for each sub-task.
We used a Jaccard index BIBREF18 to assess the similarity of our annotations. In sub-task A the Jaccard index of these initial 100 posts was 41.9%, 39.1% for sub-task B , and 42.8% for sub-task C. After some analysis of these results and the posts that we disagreed on it became obvious that to a large extent the disagreement was mainly caused by two reasons:
Guesswork of the context where the post itself was too vague to make a decisive decision on whether it was offensive or not without more context. An example of this is a post such as Skal de hjælpes hjem, næ nej de skal sendes hjem, where one might conclude, given the current political climate, that this is an offensive post targeted at immigrants. The context is, however, lacking so we cannot make a decisive decision. This post should, therefore, be labeled as non-offensive, since the post does not contain any profanity or a clearly stated group.
Failure to label posts containing some kind of profanity as offensive (typically when the posts themselves were not aggressive, harmful, or hateful). An example could be a post like @USER sgu da ikke hans skyld at hun ikke han finde ud af at koge fucking pasta, where the post itself is rather mild, but the presence of fucking makes this an offensive post according to our definitions.
In light of these findings our internal guidelines were refined so that no post should be labeled as offensive by interpreting any context that is not directly visible in the post itself and that any post containing any form of profanity should automatically be labeled as offensive. These stricter guidelines made the annotation procedure considerably easier while ensuring consistency. The remainder of the annotation task was performed by the author, resulting in 3600 annotated samples.
Final Dataset
In Table 1 the distribution of samples by source in our final dataset is presented. Although the hate speech lexicon is a useful tool, using it as a filter resulted in only 232 comments. The rest of the Reddit comments were then randomly sampled from the remaining corpus.
The fully annotated dataset was split into a train and test set, while maintaining the distribution of labels from the original dataset. The training set contains 80% of the samples, and the test set contains 20%. Table 2 presents the distribution of samples by label for both the train and test set. The dataset is skewed, with around 88% of the posts labeled as not offensive (NOT). This is, however, generally the case for user-generated content on online platforms, and any automatic detection system needs to be able to handle the problem of imbalanced data in order to be truly effective.
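A label-preserving split of this kind can be obtained with scikit-learn's stratified splitting, as in the sketch below; the variable names (texts, labels) and the random seed are assumptions for illustration.

```python
from sklearn.model_selection import train_test_split

# 80/20 split that preserves the label distribution of the full dataset.
# `texts` and `labels` are assumed to hold the 3600 annotated comments.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)
```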
Features
One of the most important factors to consider in automatic classification tasks is the feature representation. This section discusses various representations used in the abusive language detection literature.
Top-level features. In BIBREF3 information comes from top-level features such as bag-of-words, uni-grams and more complex n-grams, and the literature certainly supports this. In their work on cyberbullying detection, BIBREF8 use word n-grams, character n-grams, and bag-of-words. They report uni-gram bag-of-word features as most predictive, followed by character tri-gram bag-of-words. Later work finds character n-grams are the most helpful features BIBREF15 , underlining the need for modeling un-normalized text. These simple top-level feature approaches are good but not without their limitations, since they often have high recall but lead to a high rate of false positives BIBREF2 . This is due to the fact that the presence of certain terms can easily lead to misclassification when using these types of features. Many words, however, do not clearly indicate which category the text sample belongs to, e.g. the word gay can be used in both neutral and offensive contexts.
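Feature sets of this kind are straightforward to build with scikit-learn; the sketch below combines word uni-gram and character tri-gram bag-of-words representations, roughly mirroring the feature types described above (the variable train_texts is an assumed placeholder).

```python
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import hstack

# Word uni-gram bag-of-words plus character tri-gram bag-of-words.
word_vec = CountVectorizer(analyzer="word", ngram_range=(1, 1))
char_vec = CountVectorizer(analyzer="char_wb", ngram_range=(3, 3))

X_word = word_vec.fit_transform(train_texts)
X_char = char_vec.fit_transform(train_texts)
X = hstack([X_word, X_char])   # combined sparse feature matrix
```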
Linguistic Features BIBREF15 use a number of linguistic features, including the length of samples, average word lengths, number of periods and question marks, number of capitalized letters, number of URLs, number of polite words, number of unknown words (by using an English dictionary), and number of insults and hate speech words. Although these features have not proven to provide much value on their own, they have been shown to be a good addition to the overall feature space BIBREF15 .
Word Representations. Top-level features often require the predictive words to occur in both the training set and the test set, as discussed in BIBREF3 . For this reason, some sort of word generalization is required. BIBREF15 explore three types of embedding-derived features. First, they explore pre-trained embeddings derived from a large corpus of news samples. Secondly, they use word2vec BIBREF19 to generate word embeddings using their own corpus of text samples. We use both approaches. Both the pre-trained and word2vec models represent each word as a 200-dimensional distributed real-valued vector. Lastly, they develop a 100-dimensional comment2vec model, based on the work of BIBREF20 . Their results show that the comment2vec and the word2vec models provide the most predictive features BIBREF15 . In BIBREF21 they experiment with pre-trained GloVe embeddings BIBREF22 , learned FastText embeddings BIBREF23 , and randomly initialized learned embeddings. Interestingly, the randomly initialized embeddings slightly outperform the others BIBREF21 .
Sentiment Scores. Sentiment scores are a common addition to the feature space of classification systems dealing with offensive and hateful speech. In our work we experiment with sentiment scores, and some of our models rely on them as a dimension in their feature space. To compute these sentiment score features, our systems use two Python libraries: VADER BIBREF24 and AFINN BIBREF25 . Our models use the compound attribute, which gives a normalized sum of sentiment scores over all words in the sample. The compound attribute ranges from $-1$ (extremely negative) to $+1$ (extremely positive).
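Both libraries expose simple scoring interfaces; the sketch below shows how such scores might be computed per comment. The use of AFINN's Danish word list is an assumption here, and the compound score is the VADER attribute referred to above.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from afinn import Afinn

vader = SentimentIntensityAnalyzer()
afinn = Afinn(language="da")   # assumption: the package's Danish word list

def sentiment_features(text):
    compound = vader.polarity_scores(text)["compound"]   # in [-1, +1]
    afinn_score = afinn.score(text)                      # word-sum score
    return [compound, afinn_score]
```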
Reading Ease. As well as some of the top-level features mentioned so far, we also use Flesch-Kincaid Grade Level and Flesch Reading Ease scores. The Flesch-Kincaid Grade Level is a metric assessing the level of reading ability required to easily understand a sample of text.
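These readability scores can be computed with an off-the-shelf package such as textstat, as sketched below; note that the underlying formulas were designed for English, so applying them to Danish comments is an approximation.

```python
import textstat

def readability_features(text):
    # Flesch Reading Ease (higher = easier) and Flesch-Kincaid Grade Level.
    return [textstat.flesch_reading_ease(text),
            textstat.flesch_kincaid_grade(text)]
```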
Models
We introduce a variety of models in our work to compare different approaches to the task at hand. First of all, we introduce naive baselines that simply classify each sample as one of the categories of interest (based on BIBREF0 ). Next, we introduce a logistic regression model based on the work of BIBREF2 , using the same set of features as introduced there. Finally, we introduce three deep learning models: Learned-BiLSTM, Fast-BiLSTM, and AUX-Fast-BiLSTM. The logistic regression model is built using Scikit Learn BIBREF26 and the deep learning models are built using Keras BIBREF27 . The following sections describe these model architectures in detail, the algorithms they are based on, and the features they use.
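As a rough illustration of the deep learning models, the sketch below builds a Fast-BiLSTM-style classifier in Keras: an embedding layer initialized with pre-trained vectors and kept frozen, a bidirectional LSTM, and a softmax output. The hidden size, sequence length, optimizer, and loss are assumptions, not the exact configuration used in the experiments.

```python
from tensorflow.keras import layers, models, initializers

def build_fast_bilstm(embedding_matrix, num_classes, max_len=100):
    vocab_size, dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Embedding(vocab_size, dim,
                         embeddings_initializer=initializers.Constant(embedding_matrix),
                         trainable=False,          # "Fast": embeddings stay fixed
                         input_length=max_len),
        layers.Bidirectional(layers.LSTM(64)),     # hidden size is an assumption
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Presumably, the Learned-BiLSTM variant differs mainly in learning its embeddings during training, and the AUX variant additionally feeds in the extra feature set described above; the exact architectures may differ from this sketch.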
Results and Analysis
For each sub-task (A, B, and C, Section "Classification Structure" ) we present results for all methods in each language.
A - Offensive language identification:
English. For English (Table 3 ) Fast-BiLSTM performs best, trained for 100 epochs, using the OLID dataset. The model achieves a macro averaged F1-score of $0.735$ . This result is comparable to the BiLSTM based methods in OffensEval.
Additional training data from HSAOFL BIBREF2 does not consistently improve results. For the models using word embeddings results are worse with additional training data. On the other hand, for models that use a range of additional features (Logistic Regression and AUX-Fast-BiLSTM), the additional training data helps.
Danish. Results are in Table 4 . Logistic Regression works best with an F1-score of $0.699$ . This is the second best performing model for English, though the best performing model for English (Fast-BiLSTM) is worst for Danish.
Best results are given in Table 5 . The low scores for Danish compared to English may be explained by the small amount of data in the Danish dataset. The Danish training set contains $2,879$ samples (Table 2 ) while the English training set contains $13,240$ samples. Further, in the English dataset around $33\%$ of the samples are labeled offensive, while in the Danish set this rate is only around $12\%$ . The effect that this under-represented class has on the Danish classification task can be seen in more detail in Table 5 .
B - Categorization of offensive language type
English. In Table 6 the results are presented for sub-task B on English. The Learned-BiLSTM model trained for 60 epochs performs the best, obtaining a macro F1-score of $0.619$ .
Recall and precision scores are lower for UNT than TIN (Table 5 ). One reason is skew in the data, with only around $14\%$ of the posts labeled as UNT. The pre-trained embedding model, Fast-BiLSTM, performs the worst, with a macro averaged F1-score of $0.567$ . This indicates this approach is not good for detecting subtle differences in offensive samples in skewed data, while more complex feature models perform better.
Danish. Table 7 presents the results for sub-task B and the Danish language. The best performing system is the AUX-Fast-BiLSTM model (section UID26 ) trained for 100 epochs, which obtains an impressive macro F1-score of $0.729$ . This suggests that models that only rely on pre-trained word embeddings may not be optimal for this task. This should be considered alongside the indication in Section "Final Dataset" that relying on lexicon-based selection also performs poorly.
The limiting factor seems to be recall for the UNT category (Table 8 ). As mentioned in Section "Background" , the best performing system for sub-task B in OffensEval was a rule-based system, suggesting that more refined features, (e.g. lexica) may improve performance on this task. The better performance of models for Danish over English can most likely be explained by the fact that the training set used for Danish is more balanced, with around $42\%$ of the posts labeled as UNT.
C - Offensive language target identification
English. The results for sub-task C and the English language are presented in Table 9 . The best performing system is the Learned-BiLSTM model (section UID24 ) trained for 10 epochs, obtaining a macro averaged F1-score of $0.557$ . This is an improvement over the models introduced in BIBREF0 , where the BiLSTM based model achieves a macro F1-score of $0.470$ .
The main limitations of our model seem to be in the classification of OTH samples, as seen in Table 11 . This may be explained by the imbalance in the training data. It is interesting to see that this imbalance does not affect the GRP category as much, which only constitutes about $28\%$ of the training samples. One cause for these differences is the fact that the definition of the OTH category is vague, capturing all samples that do not belong to the previous two.
Danish. Table 10 presents the results for sub-task C and the Danish language. The best performing system is the same as in English, the Learned-BiLSTM model (section UID24 ), trained for 100 epochs, obtaining a macro averaged F1-score of $0.629$ . Given that this is the same model as the one that performed the best for English, this further indicates that task specific embeddings are helpful for more refined classification tasks.
It is interesting to see that both of the models using the additional set of features (Logistic Regression and AUX-Fast-BiLSTM) perform the worst. This indicates that these additional features are not beneficial for this more refined sub-task in Danish. The number of samples used in training for this sub-task is very low. Imbalance does not have as much effect for Danish as it does for English, as can be seen in Table 11 . Only about $14\%$ of the samples are labeled as OTH in the data (Table 2 ), but the recall and precision scores are closer than they are for English.
Analysis
We perform analysis of the misclassified samples in the evaluation of our best performing models. To accomplish this, we compute the TF-IDF scores for a range of n-grams. We then take the top scoring n-grams in each category and try to discover any patterns that might exist. We also perform some manual analysis of these misclassified samples. The goal of this process is to try to get a clear idea of the areas our classifiers are lacking in. The following sections describe this process for each of the sub-tasks.
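A minimal version of this error analysis can be written with scikit-learn, as below: for each group of misclassified samples, the highest-scoring n-grams by summed TF-IDF weight are listed. Function and variable names are illustrative only.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_ngrams(misclassified_texts, n_top=20, ngram_range=(1, 2)):
    """Return the n-grams with the highest summed TF-IDF score in a group."""
    vec = TfidfVectorizer(ngram_range=ngram_range)
    tfidf = vec.fit_transform(misclassified_texts)
    scores = np.asarray(tfidf.sum(axis=0)).ravel()
    terms = np.array(vec.get_feature_names_out())
    order = np.argsort(scores)[::-1][:n_top]
    return list(zip(terms[order], scores[order]))
```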
A - Offensive language identification
The classifier struggles to identify obfuscated offensive terms. This includes words that are concatenated together, such as barrrysoetorobullshit. The classifier also seems to associate she with offensiveness: several samples containing she are misclassified as offensive, while he is less often associated with offensive language.
There are several examples where our classifier labels profanity-bearing content as offensive that are labeled as non-offensive in the test set. Posts such as Are you fucking serious? and Fuck I cried in this scene are labeled non-offensive in the test set, but according to annotation guidelines should be classified as offensive.
The best classifier is inclined to classify longer sequences as offensive. The mean character length of misclassified offensive samples is $204.7$ , while the mean character length of the samples misclassified not offensive is $107.9$ . This may be due to any post containing any form of profanity being offensive in sub-task A, so more words increase the likelihood of $>0$ profane words.
For Danish, the classifier suffers from the same limitations as the classifier for English when it comes to obfuscated words, misclassifying samples such as Hahhaaha lær det biiiiiaaaatch as non-offensive. It also seems to associate the occurrence of the word svensken (“the Swede”) with offensive language, and quite a few samples containing that word are misclassified as offensive. This can be explained by the fact that offensive language towards Swedes is common in the training data, resulting in this association. From this, we can conclude that the classifier relies too much on the presence of individual keywords, ignoring the context of these keywords.
B - Categorization of offensive language type
Obfuscation prevails in sub-task B. Our classifier misses indicators of targeted insults such as WalkAwayFromAllDemocrats. It seems to rely too highly on the presence of profanity, misclassifying samples containing terms such as bitch, fuck, shit, etc. as targeted insults.
The issue of the data quality is also concerning in this sub-task, as we discover samples containing clear targeted insults such as HillaryForPrison being labeled as untargeted in the test set.
Our Danish classifier also seems to be missing obfuscated words such as kidsarefuckingstupid in the classification of targeted insults. It also relies, to some extent, too heavily on the presence of profanity such as pikfjæs, lorte and fucking, and misclassifies untargeted posts containing these keywords as targeted insults.
C - Offensive language target identification
Misclassification based on obfuscated terms as discussed earlier also seems to be an issue for sub-task C. This problem of obfuscated terms could be tackled by introducing character-level features such as character-level n-grams.
Conclusion
Offensive language on online social media platforms is harmful. Due to the vast amount of user-generated content on online platforms, automatic methods are required to detect this kind of harmful content. Until now, most of the research on the topic has focused on solving the problem for English. We explored English and Danish hate speech detection and categorization, finding that sharing information across languages and platforms leads to good models for the task.
The resources and classifiers are available from the authors under CC-BY license, pending use in a shared task; a data statement BIBREF29 is included in the appendix. Extended results and analysis are given in BIBREF30 . | the author and the supervisor |
fc65f19a30150a0e981fb69c1f5720f0136325b0 | fc65f19a30150a0e981fb69c1f5720f0136325b0_0 | Q: Is is known whether Sina Weibo posts are censored by humans or some automatic classifier?
Text: Introduction
In 2019, Freedom in the World, a yearly survey produced by Freedom House that measures the degree of civil liberties and political rights in every nation, recorded the 13th consecutive year of decline in global freedom. This decline spans across long-standing democracies such as USA as well as authoritarian regimes such as China and Russia. “Democracy is in retreat. The offensive against freedom of expression is being supercharged by a new and more effective form of digital authoritarianism." According to the report, China is now exporting its model of comprehensive internet censorship and surveillance around the world, offering trainings, seminars, and even study trips as well as advanced equipment.
In this paper, we deal with a particular type of censorship – when a post gets removed from a social media platform semi-automatically based on its content. We are interested in exploring whether there are systematic linguistic differences between posts that get removed by censors from Sina Weibo, a Chinese microblogging platform, and the posts that remain on the website. Sina Weibo was launched in 2009 and became the most popular social media platform in China. Sina Weibo has over 431 million monthly active users.
In cooperation with the ruling regime, Weibo sets strict control over the content published under its service BIBREF0. According to Zhu et al. zhu-etal:2013, Weibo uses a variety of strategies to target censorable posts, ranging from keyword list filtering to individual user monitoring. Among all posts that are eventually censored, nearly 30% of them are censored within 5–30 minutes, and nearly 90% within 24 hours BIBREF1. We hypothesize that the former are done automatically, while the latter are removed by human censors.
Research shows that some of the censorship decisions are not necessarily driven by the criticism of the state BIBREF2, the presence of controversial topics BIBREF3, BIBREF4, or posts that describe negative events BIBREF5. Rather, censorship is triggered by other factors, such as for example, the collective action potential BIBREF2, i.e., censors target posts that stimulate collective action, such as riots and protests.
The goal of this paper is to compare censored and uncensored posts that contain the same sensitive keywords and topics. Using the linguistic features extracted, a neural network model is built to explore whether censorship decision can be deduced from the linguistic characteristics of the posts.
The contributions of this paper are:
1. We decipher a way to determine whether a blogpost on Weibo has been deleted by the author or censored by Weibo.
2. We develop a corpus of censored and uncensored Weibo blogposts that contain sensitive keyword(s).
3. We build a neural network classifier that predicts censorship significantly better than non-expert humans.
4. We find a set of linguistic features that contributes to the censorship prediction problem.
5. We indirectly test the construct of Collective Action Potential (CAP) proposed by King et al. king-etal:2013 through crowdsourcing experiments and find that CAP is more prevalent in censored blogposts than in uncensored blogposts, as judged by human annotators.
Previous Work
There have been significant efforts to develop strategies to detect and evade censorship. Most work, however, focuses on exploiting technological limitations with existing routing protocols BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. Research that pays more attention to linguistic properties of online censorship in the context of censorship evasion include, for example, Safaka et al. safaka-etal:2016 who apply linguistic steganography to circumvent censorship. Lee lee:2016 uses parodic satire to bypass censorship in China and claims that this stylistic device delays and often evades censorship. Hiruncharoenvate et al. hirun-etal:2015 show that the use of homophones of censored keywords on Sina Weibo could help extend the time a Weibo post could remain available online. All these methods rely on a significant amount of human effort to interpret and annotate texts to evaluate the likelihood of censorship, which might not be practical to carry out for common Internet users in real life. There has also been research that uses linguistic and content clues to detect censorship. Knockel et al. knockel-etal:2015 and Zhu et al. zhu-etal:2013 propose detection mechanisms to categorize censored content and automatically learn keywords that get censored. King et al. king-etal:2013 in turn study the relationship between political criticism and chance of censorship. They come to the conclusion that posts that have a Collective Action Potential get deleted by the censors even if they support the state. Bamman et al. bamman-etal:2012 uncover a set of politically sensitive keywords and find that the presence of some of them in a Weibo blogpost contribute to a higher chance of the post being censored. Ng et al. kei-nlp4if:2018 also target a set of topics that have been suggested to be sensitive, but unlike Bamman et al. bamman-etal:2012, they cover areas not limited to politics. Ng et al. kei-nlp4if:2018 investigate how the textual content as a whole might be relevant to censorship decisions when both the censored and uncensored blogposts include the same sensitive keyword(s).
Our work is related to Ng et al. kei-nlp4if:2018 and Ng et al. ng-etal-2019-neural; however, we introduce a larger and more diverse dataset of censored posts; we experiment with a wider range of features and in fact show that not all the features reported in Ng et al. guarantee the best performance. We built a classifier that significantly outperforms Ng et al. We conduct a crowdsourcing experiment testing human judgments of controversy and censorship as well as indirectly testing the construct of collective action potential proposed by King et al.
Tracking Censorship
Tracking censorship topics on Weibo is a challenging task due to the transient nature of censored posts and the scarcity of censored data from well-known sources such as FreeWeibo and WeiboScope. The most straightforward way to collect data from a social media platform is to make use of its API. However, Weibo imposes various restrictions on the use of its API such as restricted access to certain endpoints and restricted number of posts returned per request. Above all, the Weibo API does not provide any endpoint that allows easy and efficient collection of the target data (posts that contain sensitive keywords). Therefore, an alternative method is needed to track censorship for our purpose.
Datasets ::: Using Zhu et al. (2003)'s Corpus
Zhu et al. zhu-etal:2013 collected over 2 million posts published by a set of around 3,500 sensitive users during a 2-month period in 2012. We extract around 20 thousand text-only posts using 64 keywords across 26 topics, which partially overlap with those included in the New Corpus (see below and in Table TABREF20). We filter all duplicates. Among the extracted posts, 930 (4.63%) are censored by Weibo, as verified by Zhu et al. zhu-etal:2013. The data extracted from Zhu et al. zhu-etal:2013 are also used in building classification models.
While it is possible to study the linguistic features in Zhu et al’s dataset without collecting new data, we created another corpus that targets `normal' users (Zhu et al. target `sensitive' users) and a different time period so that the results are not specific to a particular group of users and time.
Datasets ::: New Corpus ::: Web Scraping
We develop a web scraper that continuously collects and tracks data that contain sensitive keywords on the front-end. The scraper's target interface displays 20 to 24 posts that contain a certain search key term(s), resembling a search engine's result page. We call this interface the Topic Timeline since the posts all contain the same keyword(s) and are displayed in reverse chronological order. The Weibo API does not provide any endpoint that returns the same set of data appeared on the Topic Timeline. Through a series of trial-and-errors to avoid CAPTCHAs that interrupt the data collection process, we found an optimal scraping frequency of querying the Topic Timeline every 5 to 10 minutes using 17 search terms (see Appendix) across 10 topics (see Table TABREF13) for a period of 4 months (August 29, 2018 to December 29, 2018). In each query, all relevant posts and their meta-data are saved to our database. We save posts that contain texts only (i.e. posts that do not contain images, hyperlinks, re-blogged content etc.) and filter out duplicates.
Datasets ::: New Corpus ::: Decoding Censorship
According to Zhu et al. zhu-etal:2013, the unique ID of a Weibo post is the key to distinguish whether a post has been censored by Weibo or has been instead removed by the authors themselves. If a post has been censored by Weibo, querying its unique ID through the API returns an error message of “permission denied" (system-deleted), whereas a user-removed post returns an error message of “the post does not exist" (user-deleted). However, since the Topic Timeline (the data source of our web scraper) can be accessed only on the front-end (i.e. there is no API endpoint associated with it), we rely on both the front-end and the API to identify system- and user-deleted posts. It is not possible to distinguish the two types of deletion by directly querying the unique ID of all scraped posts because, through empirical experimentation, uncensored posts and censored (system-deleted) posts both return the same error message – “permission denied"). Therefore, we need to first check if a post still exists on the front-end, and then send an API request using the unique ID of the post that no longer exists to determine whether it has been deleted by the system or the user. The steps to identify censorship status of each post are illustrated in Figure FIGREF12. First, we check whether a scraped post is still available through visiting the user interface of each post. This is carried out automatically in a headless browser 2 days after a post is published. If a post has been removed (either by system or by user), the headless browser is redirected to an interface that says “the page doesn't exist"; otherwise, the browser brings us to the original interface that displays the post content. Next, after 14 days, we use the same methods in step 1 to check the posts' status again. This step allows our dataset to include posts that have been removed at a later stage. Finally, we send a follow-up API query using the unique ID of posts that no longer exist on the browser in step 1 and step 2 to determine censorship status using the same decoding techniques proposed by Zhu et al. as described above zhu-etal:2013. Altogether, around 41 thousand posts are collected, in which 952 posts (2.28%) are censored by Weibo. In our ongoing work, we are comparing the accuracy of the classifier on posts that are automatically removed vs. those removed by humans. The results will be reported in the future publications.
We would like to emphasize that while the data collection methods could be seen as recreating a keyword search, the scraping pipeline also deciphers the logic in discovering censorship on Weibo.
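The decision logic described above can be summarized in a short sketch. The helper functions below are hypothetical placeholders standing in for the headless-browser check and the API error lookup; they are not real Weibo API calls.

```python
def post_exists_on_frontend(post_id):
    """Hypothetical placeholder: headless-browser check of the post's page."""
    raise NotImplementedError

def api_error_message(post_id):
    """Hypothetical placeholder: error string returned when querying the post id."""
    raise NotImplementedError

def censorship_status(post_id):
    """Classify a scraped post as online, censored, or user-deleted."""
    # Steps 1-2: the post page is checked 2 and then 14 days after publication.
    if post_exists_on_frontend(post_id):
        return "online"
    # Step 3: a follow-up API query distinguishes the two kinds of deletion.
    error = api_error_message(post_id)
    if error == "permission denied":
        return "censored"        # removed by the system
    if error == "the post does not exist":
        return "user_deleted"    # removed by the author
    return "unknown"
```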
Datasets ::: Sample Data
Figure FIGREF15 shows several examples selected randomly from our dataset. Each pair of censored and uncensored posts contains the same sensitive keyword.
Crowdsourcing Experiment
A balanced corpus is created. The uncensored posts of each dataset are randomly sampled to match with the number of their censored counterparts (see Table TABREF13 and Table TABREF20). We select randomly a subset of the data collected by the web scraper to construct surveys for crowdsourcing experiment. The surveys ask participants three questions (see Figure FIGREF16).
Sample questions are included in the Appendix. Question 1 explores how humans perform on the task of censorship classification; question 2 explores whether a blogpost is controversial; question 3 serves as a way to explore in our data the concept of Collective Action Potential (CAP) suggested by King et al. king-etal:2013. According to King et al. king-etal:2013, Collective Action Potential is the potential to cause collective action such as protest or organized crowd formation outside the Internet. Participants can respond either Yes or No to the 3 questions above. A total of 800 blogposts (400 censored and 400 uncensored) are presented to 37 different participants through the crowdsourcing platform Witmart in 8 batches (100 blogposts per batch). Each blogpost is annotated by 6 to 12 participants. The purpose of this paper is to shed light on the “knowledge gap” between censors and normal Weibo users about censorable content. We believe Weibo users are aware of potentially censorable content but are not “trained” enough to avoid or identify it. The results are summarized in Table TABREF19.
The annotation results are intuitive – participants tend to see censored blogposts as more controversial and more likely to trigger action in real life than the uncensored counterpart.
We obtain a Fleiss' kappa score for each question to study the inter-rater agreement. Since the number and identity of participants of each batch of the survey are different, we obtain an average Fleiss' kappa from the results of each batch. The Fleiss' kappa values for questions 1 to 3 are 0.13, 0.07, and 0.03 respectively, which all fall under the category of slight agreement.
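For reference, Fleiss' kappa can be computed per batch with statsmodels, as sketched below; `answers` is an assumed (posts x raters) array of Yes/No codes for one question in one batch.

```python
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# `answers` is assumed to be an (n_posts, n_raters) array of 0/1 responses
# for a single question within one survey batch.
table, _ = aggregate_raters(answers)        # per-post category counts
kappa = fleiss_kappa(table, method="fleiss")
# Batch-level kappas are then averaged, as described above.
```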
We hypothesize that since all blogposts contain sensitive keyword(s), the annotators choose to label a fair amount of uncensored blogposts as controversial, and even as likely to be censored or cause action in real life. This might also be the reason of the low agreement scores – the sensitive keywords might be the cause of divided opinions.
Regarding the result of censorship prediction, 23.83% of censored blogposts are correctly annotated as censored, while 83.59% of uncensored blogposts are correctly annotated as uncensored. This result suggests that participants tend to predict that a blogpost will survive censorship on Weibo, despite the fact that they can see the presence of controversial element(s) in a blogpost, as suggested by the annotation results of question 2. This suggests that detecting censorable content is a non-trivial task and humans do not have a good intuition (unless specifically trained, perhaps) about what material is going to be censored. It might be true that there is some level of subjectivity from human censors. We believe there are commonalities among censored blogposts that pass through the “subjectivity filters” and such commonalities could be the linguistic features that contribute to our experiment results (see sections SECREF6 and SECREF7).
Feature Extraction
To build an automatic classifier, we first extract features from both our scraped data and Zhu et al.'s dataset. While the datasets we use are different from that of Ng et al. kei-nlp4if:2018 and Ng et al. ng-etal-2019-neural, some of the features we extract are similar to theirs. We include CRIE features (see below) and the number of followers feature that are not extracted in Ng et al. kei-nlp4if:2018's work.
Feature Extraction ::: Linguistic Features
We extract 5 sets of linguistic features from both datasets (see below) – the LIWC features, the CRIE features, the sentiment features, the semantic features, and word embeddings. We are interested in the LIWC and CRIE features because they are purely linguistic, which aligns with the objective of our study. Also, some of the LIWC features extracted from Ng et al. ng2018detecting's data have shown to be useful in classifying censored and uncensored tweets.
Feature Extraction ::: Linguistic Features ::: LIWC features
The English Linguistic Inquiry and Word Count (LIWC) BIBREF11, BIBREF12 is a program that analyzes text on a word-by-word basis, calculating percentage of words that match each language dimension, e.g., pronouns, function words, social processes, cognitive processes, drives, informal language use etc. Its lexicon consists of approximately 6400 words divided into categories belong to different linguistic dimensions and psychological processes. LIWC builds on previous research establishing strong links between linguistic patterns and personality/psychological state. We use a version of LIWC developed for Chinese by Huang et al. huang-etal:2012 to extract the frequency of word categories. Altogether we extract 95 features from LIWC. One important feature of the LIWC lexicon is that categories form a tree structure hierarchy. Some features subsume others.
Feature Extraction ::: Linguistic Features ::: Sentiment features
We use BaiduAI to obtain a set of sentiment scores for each post. BaiduAI's sentiment analyzer is built using deep learning techniques based on data found on Baidu, one of the most popular search engines and encyclopedias in mainland China. It outputs a positive sentiment score and a negative sentiment score which sum to 1.
Feature Extraction ::: Linguistic Features ::: CRIE features
We use the Chinese Readability Index Explorer (CRIE) BIBREF13, a text analysis tool developed for measuring the readability of a Chinese text based on the its linguistic components. Its internal dictionaries and lexical information are developed based on dominant corpora such as the Sinica Tree Bank. CRIE outputs 50 linguistic features (see Appendix), such as word, syntax, semantics, and cohesion in each text or produce an aggregated result for a batch of texts. CRIE can train and categorize texts based on their readability levels. We use the textual-features analysis for our data and derive readability scores for each post in our data. These scores are mainly based on descriptive statistics.
Feature Extraction ::: Linguistic Features ::: Semantic features
We use the Chinese Thesaurus developed by Mei mei:1984 and extended by HIT-SCIR to extract semantic features. The structure of this semantic dictionary is similar to WordNet, where words are divided into 12 semantic classes and each word can belong to one or more classes. It can be roughly compared to the concept of word senses. We derive a semantic ambiguity feature by dividing the number of words in each post by the number of semantic classes in it.
Feature Extraction ::: Linguistic Features ::: Frequency & readability
We compute the average frequency of characters and words in each post using Da da:2004's work and Aihanyu's CNCorpus respectively. For words with a frequency lower than 50 in the reference corpus, we count it as 0.0001%. It is intuitive to think that a text with less semantic variety and more common words and characters is relatively easier to read and understand. We derive a Readability feature by taking the mean of character frequency, word frequency and word count to semantic classes described above. It is assumed that the lower the mean of the 3 components, the less readable a text is. In fact, these 3 components are part of Sung et al. sung-et-al:2015's readability metric for native speakers on the word level and semantic level.
Feature Extraction ::: Linguistic Features ::: Word embeddings
Word vectors are trained using the word2vec tool BIBREF14 , BIBREF15 on 300,000 of the latest Chinese articles on Wikipedia. A 200-dimensional vector is computed for each word of each blogpost. The vector average of each blogpost is the sum of its word vectors divided by the number of vectors. The 200-dimensional vector averages are used as features for classification.
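A sketch of this representation with gensim is given below; the tokenized Wikipedia corpus and the training hyper-parameters other than the 200-dimensional vector size are assumptions.

```python
import numpy as np
from gensim.models import Word2Vec

# `tokenized_wiki` is assumed to be an iterable of tokenized sentences.
w2v = Word2Vec(sentences=tokenized_wiki, vector_size=200,
               window=5, min_count=5, workers=4)

def post_vector(tokens, model):
    """Average the word vectors of a blogpost's tokens."""
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.wv.vector_size)
```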
Feature Extraction ::: Non-linguistic Features ::: Followers
The number of followers of the author of each post is recorded and used as a feature for classification.
Classification
Features extracted from the balanced datasets (see Table 1 and Table 3) are used for classification. Although uncensored blogposts significantly outnumber censored ones in real life, such an unbalanced corpus might be more suitable for anomaly detection. All numeric values of the features have been standardized before classification. We use a multilayer perceptron (MLP) classifier to classify instances into censored and uncensored. A number of classification experiments using different combinations of features are carried out.
Best performances are achieved using the combination of CRIE, sentiment, semantic, frequency, readability and follower features (i.e. all features but LIWC and word embeddings) (see Table TABREF36). The feature selection is performed using random sampling. As a result, 77 features are selected that perform consistently well across the datasets. We call these features the best features set (see https://msuweb.montclair.edu/~feldmana/publications/aaai20_appendix.pdf for the full list of features).
We vary the number of epochs and hidden layers. The rest of the parameters are set to default – learning rate of 0.3, momentum of 0.2, batch size of 100, validation threshold of 20. Classification experiments are performed on 1) both datasets 2) scraped data only 3) Zhu et al.'s data only. Each experiment is validated with 10-fold cross validation. We report the accuracy of each model in Table TABREF36. It is worth mentioning that using the LIWC features only, or the CRIE features only, or the word embeddings only, or all features excluding the CRIE features, or all features except the LIWC and CRIE features all result in poor performance of below 60%. Besides MLP, we also use the same sets of features to train classifiers using Naive Bayes, Logistic, and Support Vector Machine. However, the performances are all below 65%.
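The setup can be approximated with scikit-learn as below. The hidden-layer sizes are an assumption, and the learning-rate/momentum parameters are mapped onto scikit-learn's SGD solver only loosely, since the stated settings appear to come from a different toolkit.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Standardized features, an MLP with three hidden layers, 10-fold CV.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(100, 100, 100),   # assumption
                  solver="sgd", learning_rate_init=0.3,
                  momentum=0.2, batch_size=100,
                  max_iter=500, random_state=0))
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(scores.mean())
```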
Discussion and Conclusion
Our best results are over 30% higher than the baseline and about 60% higher than the human baseline obtained through crowdsourcing, which shows that our classifier has a greater censorship predictive ability compared to human judgments. The classification on both datasets together tends to give higher accuracy using at least 3 hidden layers. However, the performance does not improve when adding additional layers (other parameters being the same). Since the two datasets were collected differently and contain different topics, combining them together results in a richer dataset that requires more hidden layers to train a better model. It is worth noting that classifying both datasets using the best features set decreases the accuracy, while using all features but LIWC improves the classification performance. The reason for this behavior could be an existence of consistent differences in the LIWC features between the datasets. Since the LIWC features in the best features set (see Appendix https://msuweb.montclair.edu/~feldmana/publications/aaai20_appendix.pdf) consist of mostly word categories of different genres of vocabulary (i.e. grammar and style agnostic), it might suggest that the two datasets use vocabularies differently. Yet, the high performance obtained excluding the LIWC features shows that the key to distinguishing between censored and uncensored posts seems to be the features related to writing style, readability, sentiment, and semantic complexity of a text.
Figure FIGREF38 shows two blogposts annotated by CRIE with number of verbs and number of first person pronoun features.
To narrow down what might be the best features for distinguishing censored and uncensored posts, we compare the mean of each feature for the two classes (see Figure FIGREF37). The 6 features that distinguish censored from uncensored posts are:
1. negative sentiment
2. average number of idioms in each sentence
3. number of content word categories
4. number of idioms
5. number of complex semantic categories
6. verbs
On the other hand, the 4 features that distinguish uncensored from censored are:
1. positive sentiment
2. words related to leisure
3. words related to reward
4. words related to money
This might suggest that the censored posts generally convey more negative sentiment and are more idiomatic and semantically complex in terms of word usage. According to King et al. king-etal:2013, Collective Action Potential, which is related to a blogpost's potential of causing riot or assembly in real-life, is the key determinant of a blogpost getting censored. Although there is not a definitive measure of this construct, it is intuitive to relate a higher average use of verbs to a post that calls for action.
On the other hand, the uncensored posts might be in general more positive in nature (positive sentiment) and include more content that talks about neutral matters (money, leisure, reward).
We further explore how the use of verbs might possibly affect censorship by studying the types of verbs used in censored and uncensored blogposts. We extracted verbs from all blogposts by using the Jieba Part-of-speech tagger . We then used the Chinese Thesaurus described in Section SECREF21 to categorize the verbs into 5 classes: Actions, Psychology, Human activities, States and phenomena, and Relations. However, no significant differences have been found across censored and uncensored blogposts. A further analysis on verbs in terms of their relationship with actions and arousal can be a part of future work.
Since the focus of this paper is to study the linguistic content of blogposts, rather than rate of censorship, we did not employ technical methods to differentiate blogposts that have different survival rates. Future work could be done to investigate any differences between blogposts that get censored at different rates. In our ongoing work, we are comparing the accuracy of the classifier on posts that are automatically removed vs. those removed by humans. The results will be reported in the future publications.
To conclude, our work shows that there are linguistic fingerprints of censorship, and it is possible to use linguistic properties of a social media post to automatically predict if it is going to be censored. It will be interesting to explore if the same linguistic features can be used to predict censorship on other social media platforms and in other languages.
Acknowledgments
This work is supported by the National Science Foundation under Grant No.: 1704113, Division of Computer and Networked Systems, Secure and Trustworthy Cyberspace (SaTC). We also thank Jed Crandall for sharing Zhu et al. zhu-etal:2013's dataset with us. | No |
5067e5eb2cddbb34b71e8b74ab9210cd46bb09c5 | 5067e5eb2cddbb34b71e8b74ab9210cd46bb09c5_0 | Q: Which matching features do they employ?
Text: Introduction
Natural Language Inference (NLI) is a crucial subtopic in Natural Language Processing (NLP). Most studies treat NLI as a classification problem, aiming at recognizing the relation types of hypothesis-premise sentence pairs, usually including “Entailment”, “Contradiction” and “Neutral”.
NLI is also called Recognizing Textual Entailment (RTE) BIBREF0 in earlier works, and a lot of statistical-based BIBREF1 and rule-based approaches BIBREF2 are proposed to solve the problem. In 2015, Bowman released the SNLI corpus BIBREF3 that provides more than 570K hypothesis-premise sentence pairs. The large-scale data of SNLI allows a Neural Network (NN) based model to perform on the NLI. Since then, a variety of NN based models have been proposed, most of which can be divided into two kinds of frameworks. The first one is based on a “Siamese" network BIBREF3 , BIBREF4 . It first applies either a Recurrent Neural Network (RNN) or a Convolutional Neural Network (CNN) to generate sentence representations for both premise and hypothesis, and then concatenates them for the final classification. The second one is called the “matching-aggregation" network BIBREF5 , BIBREF6 . It matches two sentences at word level, and then aggregates the matching results to generate a fixed vector for prediction. Matching is implemented by several functions based on element-wise operations BIBREF7 , BIBREF8 . Studies on SNLI show that the second one performs better.
Though the second framework has made considerable success on the NLI task, there are still some limitations. First, the inference on the mixed matching feature only adopts a one-pass process, which means some detailed information cannot be retrieved once it is missed. Multi-turn inference can overcome this deficiency and make better use of these matching features. Second, the mixed matching feature only concatenates different matching features as the input for aggregation. It lacks interaction among various matching features. Furthermore, it treats all the matching features equally and cannot assign different importance to different matching features.
In this paper, we propose the MIMN model to tackle these limitations. Our model uses the matching features described in BIBREF5 , BIBREF9 . However, we do not simply concatenate the features but introduce a multi-turn inference mechanism to infer different matching features with a memory component iteratively. The merits of MIMN are as follows:
We conduct experiments on three NLI datasets: SNLI BIBREF3 , SCITAIL BIBREF10 and MPE BIBREF11 . On the SNLI dataset, our single model achieves 88.3% in accuracy and our ensemble model achieves 89.3% in terms of accuracy, which are both comparable with the state-of-the-art results. Furthermore, our MIMN model outperforms all previous works on both SCITAIL and MPE dataset. Especially, the model gains substantial (8.9%) improvement on MPE dataset which contains multiple premises. This result shows our model is expert in aggregating the information of multiple premises.
Related Work
Early work on the NLI task mainly uses conventional statistical methods on small-scale datasets BIBREF0 , BIBREF12 . Recently, the neural models on NLI are based on large-scale datasets and can be categorized into two central frameworks: (i) Siamese-based framework which focuses on building sentence embeddings separately and integrates the two sentence representations to make the final prediction BIBREF4 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 ; (ii) “matching-aggregation” framework which uses various matching methods to get the interactive space of two input sentences and then aggregates the matching results to dig for deep information BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF8 , BIBREF6 , BIBREF23 , BIBREF24 , BIBREF18 , BIBREF25 , BIBREF26 .
Our model is directly motivated by the approaches proposed by BIBREF7 , BIBREF9 . BIBREF7 introduces the “matching-aggregation" framework to compare representations between words and then aggregate their matching results for final decision.
BIBREF9 enhances the comparing approaches by adding element-wise subtraction and element-wise multiplication, which further improve the performance on SNLI. The previous work shows that matching layer is an essential component of this framework and different matching methods can affect the final classification result.
Various attention-based memory neural networks BIBREF27 have been explored to solve the NLI problem BIBREF20 , BIBREF28 , BIBREF14 . BIBREF20 presents a model of deep fusion LSTMs (DF-LSTMs) (Long Short-Term Memory ) which utilizes a strong interaction between text pairs in a recursive matching memory. BIBREF28 uses a memory network to extend the LSTM architecture. BIBREF14 employs a variable sized memory model to enrich the LSTM-based input encoding information. However, all the above models are not specially designed for NLI and they all focus on input sentence encoding.
Inspired by the previous work, we propose the MIMN model. We iteratively update memory by feeding in different sequence matching features. We are the first to apply memory mechanism to matching component for the NLI task. Our experiment results on several datasets show that our MIMN model is significantly better than the previous models.
Model
In this section, we describe our MIMN model, which consists of the following five major components: encoding layer, attention layer, matching layer, multi-turn inference layer and output layer. Fig. FIGREF3 shows the architecture of our MIMN model.
We represent each example of the NLI task as a triple INLINEFORM0 , where INLINEFORM1 is a given premise, INLINEFORM2 is a given hypothesis, INLINEFORM3 and INLINEFORM4 are word embeddings of r-dimension. The true label INLINEFORM5 indicates the logical relationship between the premise INLINEFORM6 and the hypothesis INLINEFORM7 , where INLINEFORM8 . Our model aims to compute the conditional probability INLINEFORM9 and predict the label for examples in testing data set by INLINEFORM10 .
Encoding Layer
In this paper, we utilize a bidirectional LSTM (BiLSTM) BIBREF29 as our encoder to transform the word embeddings of premise and hypothesis to context vectors. The premise and the hypothesis share the same weights of BiLSTM. DISPLAYFORM0
where the context vectors INLINEFORM0 and INLINEFORM1 are the concatenation of the forward and backward hidden outputs of BiLSTM respectively. The outputs of the encoding layer are the context vectors INLINEFORM2 and INLINEFORM3 , where INLINEFORM4 is the number of hidden units of INLINEFORM5 .
Attention Layer
On the NLI task, the relevant contexts between the premise and the hypothesis are important clues for final classification. The relevant contexts can be acquired by a soft-attention mechanism BIBREF30 , BIBREF31 , which has been applied to a variety of tasks successfully. The alignments between a premise and a hypothesis are based on a score matrix. There are three commonly used methods to compute the score matrix: linear combination, bilinear combination, and dot product. For simplicity, we choose dot product in the following computation BIBREF8 . First, each element in the score matrix is computed based on the context vectors of INLINEFORM0 and INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are computed in Equations ( EQREF5 ) and (), and INLINEFORM2 is a scalar which indicates how INLINEFORM3 is related to INLINEFORM4 .
Then, we compute the alignment vectors for each word in the premise and the hypothesis as follows: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the weighted summary of the hypothesis in terms of each word in the premise. The same operation is applied to INLINEFORM1 . The outputs of this layer are INLINEFORM2 and INLINEFORM3 . For the context vectors INLINEFORM4 , the relevant contexts in the hypothesis INLINEFORM5 are represented in INLINEFORM6 . The same is applied to INLINEFORM7 and INLINEFORM8 .
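The dot-product soft attention described above can be sketched in a few lines of NumPy. Here a and b stand for the BiLSTM context vectors of the premise and hypothesis; dimensions and names are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def align(a, b):
    """a: (len_p, 2d) premise contexts, b: (len_h, 2d) hypothesis contexts."""
    scores = a @ b.T                          # e_ij = a_i . b_j
    a_tilde = softmax(scores, axis=1) @ b     # hypothesis summary per premise word
    b_tilde = softmax(scores.T, axis=1) @ a   # premise summary per hypothesis word
    return a_tilde, b_tilde
```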
Matching Layer
The goal of the matching layer is to match the context vectors INLINEFORM0 and INLINEFORM1 with the corresponding aligned vectors INLINEFORM2 and INLINEFORM3 from multi-perspective to generate a matching sequence.
In this layer, we match each context vector INLINEFORM0 against each aligned vector INLINEFORM1 to capture richer semantic information. We design three effective matching functions: INLINEFORM2 , INLINEFORM3 and INLINEFORM4 to match two vectors BIBREF32 , BIBREF5 , BIBREF9 . Each matching function takes the context vector INLINEFORM5 ( INLINEFORM6 ) and the aligned vector INLINEFORM7 ( INLINEFORM8 ) as inputs, then matches the inputs by a feed-forward network based on a particular matching operation and finally outputs a matching vector. The formulas of the three matching functions INLINEFORM9 , INLINEFORM10 and INLINEFORM11 are described in formulas ( EQREF11 ) () (). To avoid repetition, we will only describe the application of these functions to INLINEFORM12 and INLINEFORM13 . The readers can infer these equations for INLINEFORM14 and INLINEFORM15 . DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 represent concatenation, subtraction, and multiplication respectively, INLINEFORM3 , INLINEFORM4 and INLINEFORM5 are weight parameters to be learned, and INLINEFORM6 are bias parameters to be learned. The outputs of each matching function are INLINEFORM7 , which represent the matching result from three perspectives respectively. After matching the context vectors INLINEFORM8 and the aligned vectors INLINEFORM9 by INLINEFORM10 , INLINEFORM11 and INLINEFORM12 , we can get three matching features INLINEFORM13 , INLINEFORM14 and INLINEFORM15 .
The INLINEFORM0 can be considered as a joint-feature of combining the context vectors INLINEFORM1 with the aligned vectors INLINEFORM2 , which preserves all the information. The INLINEFORM3 can be seen as a diff-feature of INLINEFORM4 and INLINEFORM5 , which preserves the different parts and removes the similar parts. The INLINEFORM6 can be regarded as a sim-feature of INLINEFORM7 and INLINEFORM8 , which emphasizes the similar parts and neglects the different parts between INLINEFORM9 and INLINEFORM10 . Each feature helps us focus on particular parts between the context vectors and the aligned vectors. These matching features are vector representations with low dimension, but containing high-order semantic information. To make further use of these matching features, we collect them to generate a matching sequence INLINEFORM11 . DISPLAYFORM0
where INLINEFORM0 .
The output of this layer is the matching sequence INLINEFORM0 , which stores three kinds of matching features. The order of the matching features in INLINEFORM1 is inspired by the attention trajectory of human beings making inference on premise and hypothesis. We process the matching sequence in turn in the multi-turn inference layer. Intuitively, given a premise and a hypothesis, we will first read the original sentences to find the relevant information. Next, it's natural for us to combine all the parts of the original information and the relevant information. Then we move the attention to the different parts. Finally, we pay attention to the similar parts.
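A NumPy sketch of the three matching operations is given below; the feed-forward layers use random placeholder weights and a ReLU non-linearity, and the dimension is an assumption, so this illustrates the structure rather than the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 600                                    # context-vector size, an assumption

def ff(x, w, b):
    return np.maximum(0.0, x @ w + b)      # one feed-forward layer with ReLU

w_con, b_con = rng.normal(size=(2 * d, d)), np.zeros(d)
w_sub, b_sub = rng.normal(size=(d, d)), np.zeros(d)
w_mul, b_mul = rng.normal(size=(d, d)), np.zeros(d)

def match(a, a_tilde):
    """Concatenation, subtraction and multiplication matching of one side."""
    m_con = ff(np.concatenate([a, a_tilde], axis=-1), w_con, b_con)
    m_sub = ff(a - a_tilde, w_sub, b_sub)
    m_mul = ff(a * a_tilde, w_mul, b_mul)
    return [m_con, m_sub, m_mul]           # the matching sequence for this side
```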
Multi-turn Inference Layer
In this layer, we aim to acquire inference outputs by aggregating the information in the matching sequence with a multi-turn inference mechanism. We regard the inference on the matching sequence as a multi-turn interaction among various matching features. In each turn, we process one matching feature instead of all the matching features BIBREF9 , BIBREF26 . To enhance the information interaction between matching features, a memory component is employed to store the inference information of the previous turns. Then, the inference of each turn is based on the current matching feature and the memory. Here, we utilize another BiLSTM for the inference. DISPLAYFORM0
where INLINEFORM0 is an inference vector in the current turn, INLINEFORM1 is the index of the current turn, INLINEFORM2 , INLINEFORM3 is a memory vector that stores the historical inference information, and INLINEFORM4 is used for dimension reduction.
Then we update the memory by combining the current inference vector INLINEFORM0 with the memory vector of last turn INLINEFORM1 . An update gate is used to control the ratio of current information and history information adaptively BIBREF33 . The initial values of all the memory vectors are all zeros. DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters to be learned, and INLINEFORM2 is a sigmoid function to compress the ratio between 0-1. Finally, we use the latest memory matrix INLINEFORM3 as the inference output of premise INLINEFORM4 . Then we calculate INLINEFORM5 in a similar way. The final outputs of this layer are INLINEFORM6 and INLINEFORM7 . DISPLAYFORM0
where INLINEFORM0 stores the inference results of all matching features. The final outputs of multi-turn inference layer are INLINEFORM1 and INLINEFORM2 . The calculation of INLINEFORM3 is the same as INLINEFORM4 .
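The gated memory update can be sketched as below: an update gate computed from the current inference vector and the previous memory mixes the two. Shapes and the exact gating form are assumptions consistent with the description above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_memory(o_t, m_prev, w_g, b_g):
    """o_t, m_prev: (k,) vectors; w_g: (2k, k); b_g: (k,).

    The gate g controls the ratio of current inference information (o_t)
    to historical memory (m_prev); memory vectors start as zeros.
    """
    g = sigmoid(np.concatenate([o_t, m_prev], axis=-1) @ w_g + b_g)
    return g * o_t + (1.0 - g) * m_prev
```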
Output Layer
The final relationship judgment depends on the sentence embeddings of premise and hypothesis. We convert INLINEFORM0 and INLINEFORM1 to sentence embeddings of premise and hypothesis by max pooling and average pooling. Next, we concatenate the two sentence embeddings to a fixed-length output vector. Then we feed the output vector to a multilayer perceptron (MLP) classifier that includes a hidden layer with INLINEFORM2 activation and a softmax layer to get the final prediction. The model is trained end-to-end. We employ multi-class cross-entropy as the cost function when training the model.
Data
To verify the effectiveness of our model, we conduct experiments on three NLI datasets. The basic information about the three datasets is shown in Table TABREF19 .
The large SNLI BIBREF3 corpus serves as a major benchmark for the NLI task. The MPE corpus BIBREF11 is a newly released textual entailment dataset. Each pair in MPE consists of four premises, one hypothesis, and one label, which is different from the standard NLI datasets. The entailment relationship holds if the hypothesis comes from the same image as the four premises. The SCITAIL BIBREF10 is a dataset about science question answering. The premises are created from relevant web sentences, while hypotheses are created from science questions and the corresponding answer candidates.
Models for Comparison
We compare our model with “matching-aggregation” related and attention-based memory related models. In addition, to verify the effectiveness of the major components in our model, we design the following model variations for comparison. ESIM is considered a typical model of “matching-aggregation”, so we choose ESIM as the principal comparison object. We choose the LSTMN model with deep attention fusion, a memory-related model, as a complementary comparison. Besides the above models, the following variants of our model are designed for comparison:
ESIM We choose the ESIM model as our baseline. It mixes all the matching feature together in the matching layer and then infers the matching result in a single-turn with a BiLSTM.
600D MIMN: This is our main model described in section SECREF3 .
600D MIMN-memory: This model removes the memory component. The motivation of this experiment is to verify whether multi-turn inference can acquire more information than one-pass inference. In this model, we process one matching feature in one iteration. The three matching features are encoded by INLINEFORM0 in multiple turns iteratively without previous memory information. The output of each iteration is concatenated to form the final output of the multi-turn inference layer. Then Equations ( EQREF14 ) and ( EQREF16 ) are changed into Equations ( EQREF24 ) and () respectively, and Equation ( EQREF15 ) is removed. DISPLAYFORM0
600D MIMN-gate+ReLU : This model replaces the update gate in the memory component with a ReLU layer. The motivation of this model is to verify the effectiveness of update gate for combining current inference result and previous memory. Then the Equation ( EQREF15 ) is changed into Equation ( EQREF26 ). INLINEFORM0 stays the same as Equations ( EQREF16 ). DISPLAYFORM0
Experimental Settings
We implement our model with Tensorflow BIBREF34 . We initialize the word embeddings by the pre-trained embeddings of 300D GloVe 840B vectors BIBREF35 . The word embeddings of the out-of-vocabulary words are randomly initialized. The hidden units of INLINEFORM0 and INLINEFORM1 are 300 dimensions. All weights are constrained by L2 regularization with the weight decay coefficient of 0.0003. We also apply dropout BIBREF36 to all the layers with a dropout rate of 0.2. Batch size is set to 32. The model is optimized with Adam BIBREF37 with an initial learning rate of 0.0005, the first momentum of 0.9 and the second of 0.999. The word embeddings are fixed during all the training time. We use early-stopping (patience=10) based on the validation set accuracy. We use three turns on all the datasets. The evaluation metric is the classification accuracy. To help duplicate our results, we will release our source code at https://github.com/blcunlp/RTE/tree/master/MIMN.
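The training settings above can be summarised in a short configuration sketch; the original implementation is in TensorFlow, but the exact API is not shown in the paper, so the Keras-style calls below are an assumption used purely to restate the hyperparameters.

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4, beta_1=0.9, beta_2=0.999)
l2_regularizer = tf.keras.regularizers.l2(3e-4)     # weight decay coefficient 0.0003
dropout = tf.keras.layers.Dropout(rate=0.2)         # dropout applied to all layers
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=10)
batch_size = 32
num_turns = 3                                       # three inference turns on all datasets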
Experiments on SNLI
Experimental results of the current state-of-the-art models and three variants of our model are listed in Table TABREF29 . The first group of models (1)-(3) are the attention-based memory models on the NLI task. BIBREF20 uses external memory to increase the capacity of LSTMs. BIBREF14 utilizes an encoding memory matrix to maintain the input information. BIBREF28 extends the LSTM architecture with a memory network to enhance the interaction between the current input and all previous inputs.
The next group of models (4)-(12) belong to the “matching-aggregation” framework with bidirectional inter-attention. Decomposable attention BIBREF8 first applies “matching-aggregation” to the SNLI dataset explicitly. BIBREF5 enriches the framework with several comparison functions. BiMPM BIBREF6 not only employs a multi-perspective matching function but also allows the two sentences to be matched at multiple granularities. ESIM BIBREF9 further refines the framework by enhancing the matching tuples with element-wise subtraction and element-wise multiplication. ESIM achieves 88.0% accuracy on the SNLI test set, exceeding human performance (87.7%) for the first time. BIBREF18 and BIBREF1 both further improve the performance by taking the ESIM model as a baseline. Studies related to “matching-aggregation” but without bidirectional interaction are not listed BIBREF19 , BIBREF7 .
Motivated by the attention-based memory models and the bidirectional inter-attention models, we propose the MIMN model. The last group of models (13)-(16) are the models described in this paper. Our single MIMN model obtains an accuracy of 88.3% on the SNLI test set, which is comparable with the current state-of-the-art single models. The single MIMN model improves on ESIM by 0.3% on the test set, which shows that multi-turn inference based on the matching features and memory achieves better performance. From model (14), we also observe that memory is generally beneficial: accuracy drops by 0.8% when the memory is removed. This finding shows that the interaction between matching features is important for the final classification. To explore the way of updating memory, we replace the update gate in MIMN with a ReLU layer, which drops accuracy by 0.1%.
To further improve the performance on the SNLI dataset, an ensemble MIMN model is built for comparison. We design the ensemble model by simply averaging the probability distributions BIBREF6 of four MIMN models. Each of the models has the same architecture but is initialized with a different seed. Our ensemble model achieves state-of-the-art performance, obtaining an accuracy of 89.3% on the SNLI test set.
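The ensemble is built by averaging the class-probability distributions of the four runs and taking the argmax; a minimal sketch (names and shapes are illustrative):

import numpy as np

def ensemble_predict(prob_distributions):
    # prob_distributions: (n_models, n_examples, n_classes) softmax outputs.
    avg = np.mean(prob_distributions, axis=0)
    return avg.argmax(axis=1)

# Toy usage: four models, five examples, three NLI classes.
probs = np.stack([np.random.dirichlet(np.ones(3), size=5) for _ in range(4)])
predicted_labels = ensemble_predict(probs)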
Experiments on MPE
The MPE dataset is a brand-new dataset for NLI with four premises, one hypothesis, and one label. In order to maintain the same data format as other textual entailment datasets (one premise, one hypothesis, and one label), we concatenate the four premises as one premise.
Table TABREF31 shows the results of our models along with the published models on this dataset. LSTM is a conditional LSTM model used in BIBREF19 . WbW-Attention aligns each word in the hypothesis with the premise. The state-of-the-art model on the MPE dataset is the SE model proposed by BIBREF11 , which makes four independent predictions for each sentence pair, and the final prediction is the summation of the four predictions. Compared with SE, our MIMN model obtains a dramatic improvement (9.7%) on the MPE dataset, achieving 66.0% accuracy.
To compare with the bidirectional inter-attention model, we re-implement ESIM, which obtains 59.0% accuracy. We observe that the MIMN-memory model achieves 61.6% accuracy. This finding implies that inferring the matching features over multiple turns works better than a single turn. Compared with ESIM, our MIMN model improves accuracy by 7.0%. We further find that MIMN achieves 77.9% and 73.1% accuracy on entailment and contradiction respectively, outperforming all previous models. From the accuracy distributions on N, E, and C in Table TABREF31 , we can see that the MIMN model is good at dealing with entailment and contradiction while achieving only average performance on neutral.
Consequently, the experiment results show that our MIMN model achieves a new state-of-the-art performance on MPE test set. Besides, our MIMN-memory model and MIMN-gate+ReLU model both achieve better performance than previous models. All of our models perform well on the entailment label, which reveals that our models can aggregate information from multiple sentences for entailment judgment.
Experiments on SCITAIL
In this section, we study the effectiveness of our model on the SCITAIL dataset. Table TABREF31 presents the results of our models and the previous models on this dataset. Apart from the results reported in the original paper BIBREF10 : Majority class, ngram, decomposable attention, ESIM and DGEM, we compare further with the current state-of-the-art model CAFE BIBREF18 .
We can see that the MIMN model achieves 84.0% accuracy on the SCITAIL test set, outperforming CAFE by a margin of 0.5%. Moreover, the MIMN-gate+ReLU model also exceeds CAFE slightly. The MIMN model improves test accuracy by 13.3% compared with ESIM, which again shows that multi-turn inference is better than one-pass inference.
Conclusion
In this paper, we propose the MIMN model for the NLI task. Our model introduces a multi-turn inference mechanism to process multi-perspective matching features. Furthermore, the model employs a memory mechanism to carry forward the preceding inference information. In each turn, the inference is based on the current matching feature and the previous memory. Experimental results on the SNLI dataset show that the MIMN model is on par with the state-of-the-art models. Moreover, our model achieves new state-of-the-art results on the MPE and SCITAIL datasets. The experimental results show that the MIMN model can extract important information from multiple premises for the final judgment, and that the model is good at handling the entailment and contradiction relationships.
Acknowledgements
This work is funded by Beijing Advanced Innovation for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (No.17PT05) and Graduate Innovation Fund of BLCU (No.18YCX010). | Matching features from matching sentences from various perspectives. |
03502826f4919e251edba1525f84dd42f21b0253 | 03502826f4919e251edba1525f84dd42f21b0253_0 | Q: How much better in terms of JSD measure did their model perform?
Text: Introduction
Recurrent neural network (RNN) based techniques such as language models are the most popular approaches for text generation. These RNN-based text generators rely on maximum likelihood estimation (MLE) solutions such as teacher forcing BIBREF0 (i.e. the model is trained to predict the next item given all previous observations); however, it is well-known in the literature that MLE is a simplistic objective for this complex NLP task BIBREF1 . MLE-based methods suffer from exposure bias BIBREF2 , which means that at training time the model is exposed to gold data only, but at test time it observes its own predictions.
However, GANs, which are based on an adversarial loss function and consist of a generator and a discriminator network, suffer less from the problems mentioned above. GANs provide a better image generation framework compared to traditional MLE-based methods and have achieved substantial success in the field of computer vision for generating realistic and sharp images. This success motivated researchers to apply the framework to NLP applications as well.
GANs have been exploited recently in various NLP applications such as machine translation BIBREF3 , BIBREF4 , dialogue models BIBREF1 , question answering BIBREF5 , and natural language generation BIBREF6 , BIBREF2 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, applying GAN in NLP is challenging due to the discrete nature of the text. Consequently, back-propagation would not be feasible for discrete outputs and it is not straightforward to pass the gradients through the discrete output words of the generator. The existing GAN-based solutions can be categorized according to the technique that they leveraged for handling the problem of the discrete nature of text: Reinforcement learning (RL) based methods, latent space based solutions, and approaches based on continuous approximation of discrete sampling. Several versions of the RL-based techniques have been introduced in the literature including Seq-GAN BIBREF11 , MaskGAN BIBREF12 , and LeakGAN BIBREF13 . However, they often need pre-training and are computationally more expensive compared to the methods of the other two categories. Latent space-based solutions derive a latent space representation of the text using an AE and attempt to learn data manifold of that space BIBREF8 . Another approach for generating text with GANs is to find a continuous approximation of the discrete sampling by using the Gumbel Softmax technique BIBREF14 or approximating the non-differentiable argmax operator BIBREF9 with a continuous function.
In this work, we introduce TextKD-GAN as a new solution for the main bottleneck of using GAN for text generation with knowledge distillation: a technique that transfer the knowledge of softened output of a teacher model to a student model BIBREF15 . Our solution is based on an AE (Teacher) to derive a smooth representation of the real text. This smooth representation is fed to the TextKD-GAN discriminator instead of the conventional one-hot representation. The generator (Student) tries to learn the manifold of the softened smooth representation of the AE. We show that TextKD-GAN outperforms the conventional GAN-based text generators that do not need pre-training. The remainder of the paper is organized as follows. In the next two sections, some preliminary background on generative adversarial networks and related work in the literature will be reviewed. The proposed method will be presented in section SECREF4 . In section SECREF5 , the experimental details will be discussed. Finally, section SECREF6 will conclude the paper.
Background
Generative adversarial networks include two separate deep networks: a generator and a discriminator. The generator takes a random variable INLINEFORM0 following a distribution INLINEFORM1 and attempts to map it to the data distribution INLINEFORM2 . The output distribution of the generator is expected to converge to the data distribution during training. The discriminator, on the other hand, is expected to discern real samples from generated ones by outputting ones and zeros, respectively. During training, the generator and the discriminator generate samples and classify them, respectively, adversarially affecting each other's performance. To this end, an adversarial loss function is employed for training BIBREF16 : DISPLAYFORM0
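The displayed equation is not reproduced in this text; for reference, the standard adversarial objective of BIBREF16 that this paragraph appears to describe can be written as:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]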
This is a two-player minimax game for which a Nash-equilibrium point should be derived. Finding the solution of this game is non-trivial and there has been a great extent of literature dedicated in this regard BIBREF17 .
As stated, using GANs for text generation is challenging because of the discrete nature of text. To clarify the issue, Figure FIGREF2 depicts a simplistic architecture for GAN-based text generation. The main bottleneck of the design is the argmax operator which is not differentiable and blocks the gradient flow from the discriminator to the generator. DISPLAYFORM0
Knowledge Distillation
Knowledge distillation has been studied in model compression, where the knowledge of a large cumbersome model is transferred to a small model for easy deployment. Several studies have investigated this knowledge transfer technique BIBREF15 , BIBREF18 . It starts by training a big teacher model (or an ensemble) and then trains a small student model which tries to mimic the characteristics of the teacher model, such as its hidden representations BIBREF18 , its output probabilities BIBREF15 , or, in neural machine translation, directly the sentences generated by the teacher model BIBREF19 . The first teacher-student framework for knowledge distillation was proposed in BIBREF15 by introducing the softened teacher output. In this paper, we propose a GAN framework for text generation where the generator (Student) tries to mimic the reconstructed output representation of an auto-encoder (Teacher) instead of mapping to a conventional one-hot representation.
Improved WGAN
Generating text with pure GANs is inspired by improved Wasserstein GAN (IWGAN) work BIBREF6 . In IWGAN, a character level language model is developed based on adversarial training of a generator and a discriminator without using any extra element such as policy gradient reinforcement learning BIBREF20 . The generator produces a softmax vector over the entire vocabulary. The discriminator is responsible for distinguishing between the one-hot representations of the real text and the softmax vector of the generated text. The IWGAN method is described in Figure FIGREF6 . A disadvantage of this technique is that the discriminator is able to tell apart the one-hot input from the softmax input very easily. Hence, the generator will have a hard time fooling the discriminator and vanishing gradient problem is highly probable.
Related Work
A new version of Wasserstein GAN for text generation using gradient penalty for discriminator was proposed in BIBREF6 . Their generator is a CNN network generating fixed-length texts. The discriminator is another CNN receiving 3D tensors as input sentences. It determines whether the tensor is coming from the generator or sampled from the real data. The real sentences and the generated ones are represented using one-hot and softmax representations, respectively.
A similar approach was proposed in BIBREF2 with an RNN-based generator. They used a curriculum learning strategy BIBREF21 to produce sequences of gradually increasing lengths as training progresses. In BIBREF7 , RNN is trained to generate text with GAN using curriculum learning. The authors proposed a procedure called teacher helping, which helps the generator to produce long sequences by conditioning on shorter ground-truth sequences. All these approaches use a discriminator to discriminate the generated softmax output from one-hot real data as in Figure FIGREF6 , which is a clear downside for them. The reason is the discriminator receives inputs of different representations: a one-hot vector for real data and a probabilistic vector output from the generator. It makes the discrimination rather trivial.
AEs have been exploited along with GANs in different architectures for computer vision application such as AAE BIBREF22 , ALI BIBREF23 , and HALI BIBREF24 . Similarly, AEs can be used with GANs for generating text. For instance, an adversarially regularized AE (ARAE) was proposed in BIBREF8 . The generator is trained in parallel to an AE to learn a continuous version of the code space produced by AE encoder. Then, a discriminator will be responsible for distinguishing between the encoded hidden code and the continuous code of the generator. Basically, in this approach, a continuous distribution is generated corresponding to an encoded code of text.
Methodology
AEs can be useful in denoising text and transferring it to a code space (encoding) and then reconstructing back to the original text from the code. AEs can be combined with GANs in order to improve the generated text. In this section, we introduce a technique using AEs to replace the conventional one-hot representation BIBREF6 with a continuous softmax representation of real data for discrimination.
Distilling output probabilities of AE to TextKD-GAN generator
As stated, in the conventional text-based discrimination approach BIBREF6 , the real and generated inputs of the discriminator have different types (one-hot and softmax) and the discriminator can simply tell them apart. One way to avoid this issue is to derive a continuous smooth representation of words rather than their one-hot encoding and train the discriminator to differentiate between the continuous representations. In this work, we use a conventional AE (Teacher) to replace the one-hot representation with the softmax reconstructed output, which is a smooth representation that yields smaller variance in gradients BIBREF15 . The proposed model is depicted in Figure FIGREF8 . As seen, instead of the one-hot representation of the real words, we feed the softened reconstructed output of the AE to the discriminator. This makes the discrimination much harder for the discriminator. The GAN generator (Student) with softmax output tries to mimic the AE output distribution instead of the conventional one-hot representations used in the literature.
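A minimal sketch of this key difference, assuming placeholder callables for the AE and the generator that return logits over the vocabulary at each character position; none of the names below come from the original implementation.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def discriminator_inputs(one_hot_real, autoencoder, generator, z):
    # Real branch: softened reconstruction of the real text by the AE (Teacher).
    real_soft = softmax(autoencoder(one_hot_real))
    # Fake branch: softmax sequence produced by the generator (Student) from noise z.
    fake_soft = softmax(generator(z))
    return real_soft, fake_soft   # both are smooth distributions over the vocabulary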
Why TextKD-GAN should Work Better than IWGAN
Suppose we apply IWGAN to a language with a vocabulary of size two: words INLINEFORM0 and INLINEFORM1 . The one-hot representations of these two words (as two points in Cartesian coordinates) and the span of the generated softmax outputs (as a line segment connecting them) are depicted in the left panel of Figure FIGREF10 . As is evident graphically, the task of the discriminator is to discriminate the points from the line connecting them, which is a rather easy task.
Now, let's consider the TextKD-GAN idea using the two-word language example. As depicted in Figure FIGREF10 (Right panel), the output locus of the TextKD-GAN decoder would be two red line segments instead of two points (in the one-hot case). The two line segments lie on the output locus of the generator, which will make the generator more successful in fooling the discriminator.
Model Training
We train the AE and TextKD-GAN simultaneously. In order to do so, we break down the objective function into three terms: (1) a reconstruction term for the AE, (2) a discriminator loss function with gradient penalty, (3) an adversarial cost for the generator. Mathematically, DISPLAYFORM0
These losses are trained alternately to optimize different parts of the model. We employ the gradient penalty approach of IWGAN BIBREF6 for training the discriminator. In the gradient penalty term, we need to calculate the gradient norm of random samples INLINEFORM0 . According to the proposal in BIBREF6 , these random samples can be obtained by sampling uniformly along the line connecting pairs of generated and real data samples: DISPLAYFORM0
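A minimal TensorFlow 2 sketch of this gradient penalty term, with samples interpolated uniformly between real and generated batches; the tensor shapes (batch, sequence length, vocabulary) and the eager-mode API are assumptions, since the original implementation is not shown here.

import tensorflow as tf

def gradient_penalty(discriminator, real, fake):
    # Interpolate uniformly along the line between real and generated samples.
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        d_interp = discriminator(interp)
    grads = tape.gradient(d_interp, interp)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2]) + 1e-12)
    # Penalise deviations of the gradient norm from 1 (WGAN-GP style).
    return tf.reduce_mean(tf.square(norm - 1.0))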
The complete training algorithm is described in SECREF11 .
TextKD-GAN for text generation.
Input: the Adam hyperparameters INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ; the batch size INLINEFORM3 ; initial AE parameters (encoder INLINEFORM4 , decoder INLINEFORM5 ); discriminator parameters INLINEFORM6 ; initial generator parameters INLINEFORM7 .
For the number of training iterations:
1. AE training: sample INLINEFORM0 and compute the code-vectors INLINEFORM1 and the reconstructed text INLINEFORM0 ; backpropagate the reconstruction loss INLINEFORM0 ; update with INLINEFORM0 .
2. Discriminator training (repeated k times): sample INLINEFORM1 and INLINEFORM2 ; compute the generated text INLINEFORM0 ; backpropagate the discriminator loss INLINEFORM0 ; update with INLINEFORM0 .
3. Generator training: sample INLINEFORM0 and INLINEFORM1 ; compute the generated text INLINEFORM0 ; backpropagate the generator loss INLINEFORM0 ; update with INLINEFORM0 .
Dataset and Experimental Setup
We carried out our experiments on two different datasets: Google 1 billion benchmark language modeling data and the Stanford Natural Language Inference (SNLI) corpus. Our text generation is performed at character level with a sentence length of 32. For the Google dataset, we used the first 1 million sentences and extract the most frequent 100 characters to build our vocabulary. For the SNLI dataset, we used the entire preprocessed training data , which contains 714667 sentences in total and the built vocabulary has 86 characters. We train the AE using one layer with 512 LSTM cells BIBREF25 for both the encoder and the decoder. We train the autoencoder using Adam optimizer with learning rate 0.001, INLINEFORM0 = 0.9, and INLINEFORM1 = 0.9. For decoding, the output from the previous time step is used as the input to the next time step. The hidden code INLINEFORM2 is also used as an additional input at each time step of decoding. The greedy search approach is applied to get the best output BIBREF8 . We keep the same CNN-based generator and discriminator with residual blocks as in BIBREF6 . The discriminator is trained for 5 times for 1 GAN generator iteration. We train the generator and the discriminator using Adam optimizer with learning rate 0.0001, INLINEFORM3 = 0.5, and INLINEFORM4 = 0.9.
We use the BLEU-N score to evaluate our techniques. BLEU-N score is calculated according to the following equation BIBREF26 , BIBREF27 , BIBREF28 : DISPLAYFORM0
where INLINEFORM0 is the probability of INLINEFORM1 -gram and INLINEFORM2 . We calculate BLEU-n scores for n-grams without a brevity penalty BIBREF10 . We train all the models for 200000 iterations and the results with the best BLEU-N scores in the generated texts are reported. To calculate the BLEU-N scores, we generate ten batches of sentences as candidate texts, i.e. 640 sentences (32-character sentences) and use the entire test set as reference texts.
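Since the BLEU-N formula itself is not reproduced above, the following is a simplified sketch of the quantity described: the geometric mean of modified n-gram precisions with uniform weights and no brevity penalty. Pooling the maximum n-gram counts over the whole reference set is an assumption; the authors' exact aggregation is not specified here.

from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu_n(candidates, references, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        # Maximum count of each n-gram over all reference sentences (clipping table).
        ref_max = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                ref_max[g] = max(ref_max[g], c)
        match = total = 0
        for cand in candidates:
            cand_counts = Counter(ngrams(cand, n))
            match += sum(min(c, ref_max[g]) for g, c in cand_counts.items())
            total += sum(cand_counts.values())
        precisions.append(match / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0
    # Geometric mean with uniform weights 1/N; no brevity penalty.
    return math.exp(sum(math.log(p) for p in precisions) / max_n)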
Experimental Results
The results of the experiments are shown in Tables TABREF20 and TABREF21 . As seen in these tables, the proposed TextKD-GAN approach yields significant improvements in terms of BLEU-2, BLEU-3 and BLEU-4 scores over the IWGAN BIBREF6 and ARAE BIBREF8 approaches. Therefore, the softened smooth output of the decoder is more useful for learning a better discriminator than the traditional one-hot representation. Moreover, we observe lower BLEU scores and less improvement for the Google dataset compared to the SNLI dataset. The reason might be that the sentences in the Google dataset are more diverse and complicated. Finally, note that the text-based discrimination used in IWGAN and in our proposed method works better than the traditional code-based ARAE technique BIBREF8 .
Some examples of generated text from the SNLI experiment are listed in Table TABREF22 . As seen, the generated text by the proposed TextKD-GAN approach is more meaningful and contains more correct words compared to that of IWGAN BIBREF6 .
We also provide the training curves of Jensen-Shannon distances (JSD) between the INLINEFORM0 -grams of the generated sentences and that of the training (real) ones in Figure FIGREF23 . The distances are derived from SNLI experiments and calculated as in BIBREF6 . That is by calculating the log-probabilities of the INLINEFORM1 -grams of the generated and the real sentences. As depicted in the figure, the TextKD-GAN approach further minimizes the JSD compared to the literature methods BIBREF6 , BIBREF8 . In conclusion, our approach learns a more powerful discriminator, which in turn generates the data distribution close to the real data distribution.
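For reference, a minimal sketch of how such a Jensen-Shannon distance between n-gram distributions can be computed; whether BIBREF6 uses word- or character-level n-grams and which logarithm base is used are not specified here, so natural logarithms are assumed.

from collections import Counter
import math

def ngram_distribution(sentences, n):
    counts = Counter()
    for s in sentences:
        counts.update(tuple(s[i:i + n]) for i in range(len(s) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def js_distance(p, q):
    # Jensen-Shannon distance: square root of the average KL divergence
    # of p and q to their mixture m.
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a[k] * math.log(a[k] / m[k]) for k in keys if a.get(k, 0.0) > 0.0)
    return math.sqrt(0.5 * kl(p) + 0.5 * kl(q))

# Usage: js_distance(ngram_distribution(generated, 4), ngram_distribution(real, 4))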
Discussion
The results of our experiments show the superiority of our TextKD-GAN method over other conventional GAN-based techniques. We compared our technique with GAN-based generators that do not need pre-training, which is why we have not included the RL-based techniques in the results. We showed the power of continuous smooth representations over the well-known tricks for working around the discontinuity of text for GANs. Using AEs in TextKD-GAN adds another important dimension to our technique, namely the latent space, which can be modeled and exploited as a separate signal for discriminating the generated text from the real data. It is worth mentioning that our observations during the experiments show that training text-based generators is much easier than training code-based techniques such as ARAE. Moreover, we observed that the gradient penalty term plays a significant part in reducing mode collapse in the text generated by the GAN. Furthermore, in this work we focused on character-based techniques; however, TextKD-GAN is applicable to word-based settings as well. Bear in mind that pure GAN-based text generation techniques are still at an early stage and are not yet very powerful in learning the semantics of complex datasets and long sentences. This might be due to the limited capacity of CNN networks for capturing long-term information. To address this problem, RL can be employed as a next step to empower pure GAN-based techniques such as TextKD-GAN.
Conclusion and Future Work
In this work, we introduced TextKD-GAN as a new solution using knowledge distillation for the main bottleneck of using GAN for generating text, which is the discontinuity of text. Our solution is based on an AE (Teacher) to derive a continuous smooth representation of the real text. This smooth representation is distilled to the GAN discriminator instead of the conventional one-hot representation. We demonstrated the rationale behind this approach, which is to make the discrimination task of the discriminator between the real and generated texts more difficult and consequently providing a richer signal to the generator. At the time of training, the TextKD-GAN generator (Student) would try to learn the manifold of the smooth representation, which can later on be mapped to the real data distribution by applying the argmax operator. We evaluated TextKD-GAN over two benchmark datasets using the BLEU-N scores, JSD measures, and quality of the output generated text. The results showed that the proposed TextKD-GAN approach outperforms the traditional GAN-based text generation methods which does not need pre-training such as IWGAN and ARAE. Finally, We summarize our plan for future work in the following: | Unanswerable |
9368471073c66fefebc04f1820209f563a840240 | 9368471073c66fefebc04f1820209f563a840240_0 | Q: What does the Jensen-Shannon distance measure?
Text: Introduction
Recurrent neural network (RNN) based techniques such as language models are the most popular approaches for text generation. These RNN-based text generators rely on maximum likelihood estimation (MLE) solutions such as teacher forcing BIBREF0 (i.e. the model is trained to predict the next item given all previous observations); however, it is well-known in the literature that MLE is a simplistic objective for this complex NLP task BIBREF1 . MLE-based methods suffer from exposure bias BIBREF2 , which means that at training time the model is exposed to gold data only, but at test time it observes its own predictions.
However, GANs, which are based on an adversarial loss function and consist of a generator and a discriminator network, suffer less from the problems mentioned above. GANs provide a better image generation framework compared to traditional MLE-based methods and have achieved substantial success in the field of computer vision for generating realistic and sharp images. This success motivated researchers to apply the framework to NLP applications as well.
GANs have been exploited recently in various NLP applications such as machine translation BIBREF3 , BIBREF4 , dialogue models BIBREF1 , question answering BIBREF5 , and natural language generation BIBREF6 , BIBREF2 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, applying GAN in NLP is challenging due to the discrete nature of the text. Consequently, back-propagation would not be feasible for discrete outputs and it is not straightforward to pass the gradients through the discrete output words of the generator. The existing GAN-based solutions can be categorized according to the technique that they leveraged for handling the problem of the discrete nature of text: Reinforcement learning (RL) based methods, latent space based solutions, and approaches based on continuous approximation of discrete sampling. Several versions of the RL-based techniques have been introduced in the literature including Seq-GAN BIBREF11 , MaskGAN BIBREF12 , and LeakGAN BIBREF13 . However, they often need pre-training and are computationally more expensive compared to the methods of the other two categories. Latent space-based solutions derive a latent space representation of the text using an AE and attempt to learn data manifold of that space BIBREF8 . Another approach for generating text with GANs is to find a continuous approximation of the discrete sampling by using the Gumbel Softmax technique BIBREF14 or approximating the non-differentiable argmax operator BIBREF9 with a continuous function.
In this work, we introduce TextKD-GAN as a new solution for the main bottleneck of using GAN for text generation with knowledge distillation: a technique that transfer the knowledge of softened output of a teacher model to a student model BIBREF15 . Our solution is based on an AE (Teacher) to derive a smooth representation of the real text. This smooth representation is fed to the TextKD-GAN discriminator instead of the conventional one-hot representation. The generator (Student) tries to learn the manifold of the softened smooth representation of the AE. We show that TextKD-GAN outperforms the conventional GAN-based text generators that do not need pre-training. The remainder of the paper is organized as follows. In the next two sections, some preliminary background on generative adversarial networks and related work in the literature will be reviewed. The proposed method will be presented in section SECREF4 . In section SECREF5 , the experimental details will be discussed. Finally, section SECREF6 will conclude the paper.
Background
Generative adversarial networks include two separate deep networks: a generator and a discriminator. The generator takes a random variable INLINEFORM0 following a distribution INLINEFORM1 and attempts to map it to the data distribution INLINEFORM2 . The output distribution of the generator is expected to converge to the data distribution during training. The discriminator, on the other hand, is expected to discern real samples from generated ones by outputting ones and zeros, respectively. During training, the generator and the discriminator generate samples and classify them, respectively, adversarially affecting each other's performance. To this end, an adversarial loss function is employed for training BIBREF16 : DISPLAYFORM0
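The displayed equation is not reproduced in this text; for reference, the standard adversarial objective of BIBREF16 that this paragraph appears to describe can be written as:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]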
This is a two-player minimax game for which a Nash-equilibrium point should be derived. Finding the solution of this game is non-trivial and there has been a great extent of literature dedicated in this regard BIBREF17 .
As stated, using GANs for text generation is challenging because of the discrete nature of text. To clarify the issue, Figure FIGREF2 depicts a simplistic architecture for GAN-based text generation. The main bottleneck of the design is the argmax operator which is not differentiable and blocks the gradient flow from the discriminator to the generator. DISPLAYFORM0
Knowledge Distillation
Knowledge distillation has been studied in model compression, where the knowledge of a large cumbersome model is transferred to a small model for easy deployment. Several studies have investigated this knowledge transfer technique BIBREF15 , BIBREF18 . It starts by training a big teacher model (or an ensemble) and then trains a small student model which tries to mimic the characteristics of the teacher model, such as its hidden representations BIBREF18 , its output probabilities BIBREF15 , or, in neural machine translation, directly the sentences generated by the teacher model BIBREF19 . The first teacher-student framework for knowledge distillation was proposed in BIBREF15 by introducing the softened teacher output. In this paper, we propose a GAN framework for text generation where the generator (Student) tries to mimic the reconstructed output representation of an auto-encoder (Teacher) instead of mapping to a conventional one-hot representation.
Improved WGAN
Generating text with pure GANs is inspired by improved Wasserstein GAN (IWGAN) work BIBREF6 . In IWGAN, a character level language model is developed based on adversarial training of a generator and a discriminator without using any extra element such as policy gradient reinforcement learning BIBREF20 . The generator produces a softmax vector over the entire vocabulary. The discriminator is responsible for distinguishing between the one-hot representations of the real text and the softmax vector of the generated text. The IWGAN method is described in Figure FIGREF6 . A disadvantage of this technique is that the discriminator is able to tell apart the one-hot input from the softmax input very easily. Hence, the generator will have a hard time fooling the discriminator and vanishing gradient problem is highly probable.
Related Work
A new version of Wasserstein GAN for text generation using gradient penalty for discriminator was proposed in BIBREF6 . Their generator is a CNN network generating fixed-length texts. The discriminator is another CNN receiving 3D tensors as input sentences. It determines whether the tensor is coming from the generator or sampled from the real data. The real sentences and the generated ones are represented using one-hot and softmax representations, respectively.
A similar approach was proposed in BIBREF2 with an RNN-based generator. They used a curriculum learning strategy BIBREF21 to produce sequences of gradually increasing lengths as training progresses. In BIBREF7 , RNN is trained to generate text with GAN using curriculum learning. The authors proposed a procedure called teacher helping, which helps the generator to produce long sequences by conditioning on shorter ground-truth sequences. All these approaches use a discriminator to discriminate the generated softmax output from one-hot real data as in Figure FIGREF6 , which is a clear downside for them. The reason is the discriminator receives inputs of different representations: a one-hot vector for real data and a probabilistic vector output from the generator. It makes the discrimination rather trivial.
AEs have been exploited along with GANs in different architectures for computer vision application such as AAE BIBREF22 , ALI BIBREF23 , and HALI BIBREF24 . Similarly, AEs can be used with GANs for generating text. For instance, an adversarially regularized AE (ARAE) was proposed in BIBREF8 . The generator is trained in parallel to an AE to learn a continuous version of the code space produced by AE encoder. Then, a discriminator will be responsible for distinguishing between the encoded hidden code and the continuous code of the generator. Basically, in this approach, a continuous distribution is generated corresponding to an encoded code of text.
Methodology
AEs can be useful in denoising text and transferring it to a code space (encoding) and then reconstructing back to the original text from the code. AEs can be combined with GANs in order to improve the generated text. In this section, we introduce a technique using AEs to replace the conventional one-hot representation BIBREF6 with a continuous softmax representation of real data for discrimination.
Distilling output probabilities of AE to TextKD-GAN generator
As stated, in the conventional text-based discrimination approach BIBREF6 , the real and generated inputs of the discriminator have different types (one-hot and softmax) and the discriminator can simply tell them apart. One way to avoid this issue is to derive a continuous smooth representation of words rather than their one-hot encoding and train the discriminator to differentiate between the continuous representations. In this work, we use a conventional AE (Teacher) to replace the one-hot representation with the softmax reconstructed output, which is a smooth representation that yields smaller variance in gradients BIBREF15 . The proposed model is depicted in Figure FIGREF8 . As seen, instead of the one-hot representation of the real words, we feed the softened reconstructed output of the AE to the discriminator. This makes the discrimination much harder for the discriminator. The GAN generator (Student) with softmax output tries to mimic the AE output distribution instead of the conventional one-hot representations used in the literature.
Why TextKD-GAN should Work Better than IWGAN
Suppose we apply IWGAN to a language with a vocabulary of size two: words INLINEFORM0 and INLINEFORM1 . The one-hot representations of these two words (as two points in Cartesian coordinates) and the span of the generated softmax outputs (as a line segment connecting them) are depicted in the left panel of Figure FIGREF10 . As is evident graphically, the task of the discriminator is to discriminate the points from the line connecting them, which is a rather easy task.
Now, let's consider the TextKD-GAN idea using the two-word language example. As depicted in Figure FIGREF10 (Right panel), the output locus of the TextKD-GAN decoder would be two red line segments instead of two points (in the one-hot case). The two line segments lie on the output locus of the generator, which will make the generator more successful in fooling the discriminator.
Model Training
We train the AE and TextKD-GAN simultaneously. In order to do so, we break down the objective function into three terms: (1) a reconstruction term for the AE, (2) a discriminator loss function with gradient penalty, (3) an adversarial cost for the generator. Mathematically, DISPLAYFORM0
These losses are trained alternately to optimize different parts of the model. We employ the gradient penalty approach of IWGAN BIBREF6 for training the discriminator. In the gradient penalty term, we need to calculate the gradient norm of random samples INLINEFORM0 . According to the proposal in BIBREF6 , these random samples can be obtained by sampling uniformly along the line connecting pairs of generated and real data samples: DISPLAYFORM0
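A minimal TensorFlow 2 sketch of this gradient penalty term, with samples interpolated uniformly between real and generated batches; the tensor shapes (batch, sequence length, vocabulary) and the eager-mode API are assumptions, since the original implementation is not shown here.

import tensorflow as tf

def gradient_penalty(discriminator, real, fake):
    # Interpolate uniformly along the line between real and generated samples.
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        d_interp = discriminator(interp)
    grads = tape.gradient(d_interp, interp)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2]) + 1e-12)
    # Penalise deviations of the gradient norm from 1 (WGAN-GP style).
    return tf.reduce_mean(tf.square(norm - 1.0))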
The complete training algorithm is described in SECREF11 .
TextKD-GAN for text generation.
Input: the Adam hyperparameters INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ; the batch size INLINEFORM3 ; initial AE parameters (encoder INLINEFORM4 , decoder INLINEFORM5 ); discriminator parameters INLINEFORM6 ; initial generator parameters INLINEFORM7 .
For the number of training iterations:
1. AE training: sample INLINEFORM0 and compute the code-vectors INLINEFORM1 and the reconstructed text INLINEFORM0 ; backpropagate the reconstruction loss INLINEFORM0 ; update with INLINEFORM0 .
2. Discriminator training (repeated k times): sample INLINEFORM1 and INLINEFORM2 ; compute the generated text INLINEFORM0 ; backpropagate the discriminator loss INLINEFORM0 ; update with INLINEFORM0 .
3. Generator training: sample INLINEFORM0 and INLINEFORM1 ; compute the generated text INLINEFORM0 ; backpropagate the generator loss INLINEFORM0 ; update with INLINEFORM0 .
Dataset and Experimental Setup
We carried out our experiments on two different datasets: Google 1 billion benchmark language modeling data and the Stanford Natural Language Inference (SNLI) corpus. Our text generation is performed at character level with a sentence length of 32. For the Google dataset, we used the first 1 million sentences and extract the most frequent 100 characters to build our vocabulary. For the SNLI dataset, we used the entire preprocessed training data , which contains 714667 sentences in total and the built vocabulary has 86 characters. We train the AE using one layer with 512 LSTM cells BIBREF25 for both the encoder and the decoder. We train the autoencoder using Adam optimizer with learning rate 0.001, INLINEFORM0 = 0.9, and INLINEFORM1 = 0.9. For decoding, the output from the previous time step is used as the input to the next time step. The hidden code INLINEFORM2 is also used as an additional input at each time step of decoding. The greedy search approach is applied to get the best output BIBREF8 . We keep the same CNN-based generator and discriminator with residual blocks as in BIBREF6 . The discriminator is trained for 5 times for 1 GAN generator iteration. We train the generator and the discriminator using Adam optimizer with learning rate 0.0001, INLINEFORM3 = 0.5, and INLINEFORM4 = 0.9.
We use the BLEU-N score to evaluate our techniques. BLEU-N score is calculated according to the following equation BIBREF26 , BIBREF27 , BIBREF28 : DISPLAYFORM0
where INLINEFORM0 is the probability of INLINEFORM1 -gram and INLINEFORM2 . We calculate BLEU-n scores for n-grams without a brevity penalty BIBREF10 . We train all the models for 200000 iterations and the results with the best BLEU-N scores in the generated texts are reported. To calculate the BLEU-N scores, we generate ten batches of sentences as candidate texts, i.e. 640 sentences (32-character sentences) and use the entire test set as reference texts.
Experimental Results
The results of the experiments are shown in Tables TABREF20 and TABREF21 . As seen in these tables, the proposed TextKD-GAN approach yields significant improvements in terms of BLEU-2, BLEU-3 and BLEU-4 scores over the IWGAN BIBREF6 and ARAE BIBREF8 approaches. Therefore, the softened smooth output of the decoder is more useful for learning a better discriminator than the traditional one-hot representation. Moreover, we observe lower BLEU scores and less improvement for the Google dataset compared to the SNLI dataset. The reason might be that the sentences in the Google dataset are more diverse and complicated. Finally, note that the text-based discrimination used in IWGAN and in our proposed method works better than the traditional code-based ARAE technique BIBREF8 .
Some examples of generated text from the SNLI experiment are listed in Table TABREF22 . As seen, the generated text by the proposed TextKD-GAN approach is more meaningful and contains more correct words compared to that of IWGAN BIBREF6 .
We also provide the training curves of Jensen-Shannon distances (JSD) between the INLINEFORM0 -grams of the generated sentences and that of the training (real) ones in Figure FIGREF23 . The distances are derived from SNLI experiments and calculated as in BIBREF6 . That is by calculating the log-probabilities of the INLINEFORM1 -grams of the generated and the real sentences. As depicted in the figure, the TextKD-GAN approach further minimizes the JSD compared to the literature methods BIBREF6 , BIBREF8 . In conclusion, our approach learns a more powerful discriminator, which in turn generates the data distribution close to the real data distribution.
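For reference, a minimal sketch of how such a Jensen-Shannon distance between n-gram distributions can be computed; whether BIBREF6 uses word- or character-level n-grams and which logarithm base is used are not specified here, so natural logarithms are assumed.

from collections import Counter
import math

def ngram_distribution(sentences, n):
    counts = Counter()
    for s in sentences:
        counts.update(tuple(s[i:i + n]) for i in range(len(s) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def js_distance(p, q):
    # Jensen-Shannon distance: square root of the average KL divergence
    # of p and q to their mixture m.
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a[k] * math.log(a[k] / m[k]) for k in keys if a.get(k, 0.0) > 0.0)
    return math.sqrt(0.5 * kl(p) + 0.5 * kl(q))

# Usage: js_distance(ngram_distribution(generated, 4), ngram_distribution(real, 4))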
Discussion
The results of our experiments show the superiority of our TextKD-GAN method over other conventional GAN-based techniques. We compared our technique with GAN-based generators that do not need pre-training, which is why we have not included the RL-based techniques in the results. We showed the power of continuous smooth representations over the well-known tricks for working around the discontinuity of text for GANs. Using AEs in TextKD-GAN adds another important dimension to our technique, namely the latent space, which can be modeled and exploited as a separate signal for discriminating the generated text from the real data. It is worth mentioning that our observations during the experiments show that training text-based generators is much easier than training code-based techniques such as ARAE. Moreover, we observed that the gradient penalty term plays a significant part in reducing mode collapse in the text generated by the GAN. Furthermore, in this work we focused on character-based techniques; however, TextKD-GAN is applicable to word-based settings as well. Bear in mind that pure GAN-based text generation techniques are still at an early stage and are not yet very powerful in learning the semantics of complex datasets and long sentences. This might be due to the limited capacity of CNN networks for capturing long-term information. To address this problem, RL can be employed as a next step to empower pure GAN-based techniques such as TextKD-GAN.
Conclusion and Future Work
In this work, we introduced TextKD-GAN as a new solution using knowledge distillation for the main bottleneck of using GAN for generating text, which is the discontinuity of text. Our solution is based on an AE (Teacher) to derive a continuous smooth representation of the real text. This smooth representation is distilled to the GAN discriminator instead of the conventional one-hot representation. We demonstrated the rationale behind this approach, which is to make the discrimination task of the discriminator between the real and generated texts more difficult and consequently providing a richer signal to the generator. At the time of training, the TextKD-GAN generator (Student) would try to learn the manifold of the smooth representation, which can later on be mapped to the real data distribution by applying the argmax operator. We evaluated TextKD-GAN over two benchmark datasets using the BLEU-N scores, JSD measures, and quality of the output generated text. The results showed that the proposed TextKD-GAN approach outperforms the traditional GAN-based text generation methods which does not need pre-training such as IWGAN and ARAE. Finally, We summarize our plan for future work in the following: | Unanswerable |
981443fce6167b3f6cadf44f9f108d68c1a3f4ab | 981443fce6167b3f6cadf44f9f108d68c1a3f4ab_0 | Q: Which countries and languages do the political speeches and manifestos come from?
Text: Introduction
Modern media generate a large amount of content at an ever increasing rate. Keeping an unbiased view on what media report on requires to understand the political bias of texts. In many cases it is obvious which political bias an author has. In other cases some expertise is required to judge the political bias of a text. When dealing with large amounts of text however there are simply not enough experts to examine all possible sources and publications. Assistive technology can help in this context to try and obtain a more unbiased sample of information.
Ideally one would choose for each topic a sample of reports from the entire political spectrum in order to form an unbiased opinion. But ordering media content with respect to the political spectrum at scale requires automated prediction of political bias. The aim of this study is to provide empirical evidence indicating that leveraging open data sources of german texts, automated political bias prediction is possible with above chance accuracy. These experimental results confirm and extend previous findings BIBREF0 , BIBREF1 ; a novel contribution of this work is a proof of concept which applies this technology to sort news article recommendations according to their political bias.
When human experts determine political bias of texts they will take responsibility for what they say about a text, and they can explain their decisions. This is a key difference to many statistical learning approaches. Not only is the responsibility question problematic, it can also be difficult to interpret some of the decisions. In order to validate and explain the predictions of the models three strategies that allow for better interpretations of the models are proposed. First the model misclassifications are related to changes in party policies. Second univariate measures of correlation between text features and party affiliation allow to relate the predictions to the kind of information that political experts use for interpreting texts. Third sentiment analysis is used to investigate whether this aspect of language has discriminatory power.
In the following, sec:related briefly surveys related work; sec:data gives an overview of the data acquisition and preprocessing methods; sec:model presents the model, training and evaluation procedures; sec:results discusses the results; and sec:conclusion concludes with some interpretations of the results and future research directions.
Related Work
Throughout the last years automated content analyses for political texts have been conducted on a variety of text data sources (parliament data blogs, tweets, news articles, party manifestos) with a variety of methods, including sentiment analysis, stylistic analyses, standard bag-of-word (BOW) text feature classifiers and more advanced natural language processing tools. While a complete overview is beyond the scope of this work, the following paragraphs list similarities and differences between this study and previous work. For a more complete overview we refer the reader to BIBREF2 , BIBREF3 .
A similar approach to the one presented here was taken in BIBREF0 . The authors extracted BOW feature vectors and applied linear classifiers to predict the political party affiliation of US congress speeches. They used data from the two chambers of the US congress, House and Senate, in order to assess the generalization performance of a classifier trained on data from one chamber and tested on data from the other. They found that the accuracy of the model decreased significantly when it was trained on one domain and tested on another. Generalization was also affected by the time difference between the political speeches used for training and those used for testing.
Other work has focused on developing dedicated methods for predicting political bias. Two popular methods are WordFish BIBREF4 and WordScores BIBREF5 , or improved versions thereof, see e.g. BIBREF6 . These approaches have been very valuable for a posteriori analysis of historical data but they do not seem to be used as much for analyses of new data in a predictive analytics setting. Moreover direct comparisons of the results obtained with these so called scaling methods with the results of the present study or those of studies as BIBREF0 are difficult, due to the different modeling and evaluation approaches: Validations of WordFish/WordScore based analyses often compare parameter estimates of the different models rather than predictions of these models on held-out data with respect to the same type of labels used to train the models.
Finally Hirst et al conducted a large number of experiments on data from the Canadian parliament and the European parliament; these experiments can be directly compared to the present study both in terms of methodology but also with respect to their results BIBREF1 . The authors show that a linear classifier trained on parliament speeches uses language elements of defense and attack to classify speeches, rather than ideological vocabulary. The authors also argue that emotional content plays an important role in automatic analysis of political texts. Furthermore their results show a clear dependency between length of a political text and the accuracy with which it can be classified correctly.
Taken together, there is a large body of literature in this expanding field in which scientists from quantitative empirical disciplines as well as political science experts collaborate on the challenging topic of automated analysis of political texts. With few exceptions, most previous work has focused on binary classification or on the assignment of a one-dimensional policy position (mostly left vs right). Yet many applications require taking into account more subtle differences in political policies. This work focuses on more fine-grained political view prediction: for one, the german parliament is more diverse than two-party parliament systems, allowing for a distinction between more policies; second, the political view labels considered are more fine-grained than in previous studies. While previous studies used such labels only for partitioning training data BIBREF4 (which is not possible at test time in real-world applications where these labels are not known), the experiments presented in this study directly predict these labels. Another important contribution of this work is that, while many existing studies are primarily concerned with a posteriori analysis of historical data, this work aims at prediction of political bias on out-of-domain data with a focus on the practical application of the model to new data, for which a prototypical web application is provided. The experiments on out-of-domain generalization complement the work of BIBREF0 , BIBREF1 with results from data of the german parliament and novel sentiment analyses.
Data Sets and Feature Extraction
All experiments were run on publicly available data sets of german political texts and standard libraries for processing the text. The following sections describe the details of data acquisition and feature extraction.
Data
Annotated political text data was obtained from two sources: a) the discussions and speeches held in the german parliament (Bundestag) and b) all manifesto texts of parties running for election in the german parliament in the current 18th and the last, 17th, legislation period.
Parliament texts are annotated with the respective party label, which we take here as a proxy for political bias. The texts of parliament protocols are available through the website of the german bundestag; an open source API was used to query the data in a cleaned and structured format. In total 22784 speeches were extracted for the 17th legislative period and 11317 speeches for the 18th period, queried until March 2016.
For party manifestos another openly accessible API was used, provided by the Wissenschaftszentrum Berlin (WZB). The API is released as part of the Manifestoproject BIBREF7 . The data released in this project comprises the complete manifestos for each party that ran for election enriched with annotations by political experts. Each sentence (in some cases also parts of sentences) is annotated with one of 56 political labels. Examples of these labels are pro/contra protectionism, decentralism, centralism, pro/contra welfare; for a complete list and detailed explanations on how the annotators were instructed see BIBREF8 . The set of labels was developed by political scientists at the WZB and released for public use. All manifestos of parties that were running for election in this and the last legislative period were obtained. In total this resulted in 29451 political statements that had two types of labels: First the party affiliation of each political statement; this label was used to evaluate the party evaluation classifiers trained on the parliament speeches. For this purpose the data acquisition was constrained to only those parties that were elected into the parliament. Next to the party affiliation the political view labels were extracted. For the analyses based on political view labels all parties were considered, also those that did not make it into the parliament.
The length of each annotated statement in the party manifestos was rather short. The longest statement was 522 characters long, the 25%/50%/75% percentiles were 63/95/135 characters. Measured in words the longest data point was 65 words and the 25%/50%/75% percentiles were 8/12/17 words, respectively. This can be considered as a very valuable property of the data set, because it allows a fine grained resolution of party manifestos. However for a classifier (as well as for humans) such short sentences can be rather difficult to classify. In order to obtain less 'noisy' data points from each party – for the party affiliation task only – all statements were aggregated into political topics using the manifesto code labels. Each political view label is a three digit code, the first digit represents the political domain. In total there were eight political domains (topics): External Relations, Freedom and Democracy, Political System, Economy, Welfare and Quality of Life, Fabric of Society, Social Groups and a topic undefined, for a complete list see also BIBREF8 . These 8 topics were used to aggregate all statements in each manifesto into topics. Most party manifestos covered all eight of them, some party manifestos in the 17th Bundestag only covered seven.
Bag-of-Words Vectorization
First each data set was segmented into semantic units; in the case of the parliament discussions these were the speeches, in the case of the party manifesto data the semantic units were the sentences or sentence parts associated with one of the 56 political view labels. Parliament speeches were often interrupted; in this case each uninterrupted part of a speech was considered a semantic unit. The strings of each semantic unit were tokenised and transformed into bag-of-words vectors as implemented in scikit-learn BIBREF9 . The general idea of bag-of-words vectors is to simply count occurrences of words (or word sequences, also called n-grams) for each data point. A data point is usually a document; here it is a semantic unit of a parliament speech or a manifesto sentence, respectively. The text of each semantic unit is transformed into a vector $x \in \mathbb{R}^d$, where $d$ is the size of the dictionary; the $i$-th entry of $x$ contains the (normalized) count of the $i$-th word (or sequence of words) in our dictionary. Several options for vectorizing the speeches were tried, including term-frequency-inverse-document-frequency normalisation, n-gram patterns of varying maximal length and several cutoffs for discarding too frequent and too infrequent words. All of these hyperparameters were subjected to hyperparameter optimization as explained in sec:crossvalidation.
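A minimal sketch of this vectorization step with scikit-learn is shown below; the concrete settings (n-gram range, frequency cutoffs, tf-idf weighting) are only placeholders for the values that were actually selected by the hyperparameter search described in sec:crossvalidation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

semantic_units = ["Wir fordern mehr erneuerbare Energien.",
                  "Die Wettbewerbsfähigkeit muss gestärkt werden."]  # toy examples

# Placeholder hyperparameters; tf-idf, n-gram range and frequency cutoffs were tuned.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1, max_df=0.95)
X = vectorizer.fit_transform(semantic_units)   # sparse document-term matrix

print(X.shape)                      # (number of semantic units, dictionary size d)
print(len(vectorizer.vocabulary_))  # size of the learned dictionary
```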
Classification Model and Training Procedure
Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let $y \in \{1,\dots,K\}$ be the true label, where $K$ is the total number of labels, and let $W = [w_1,\dots,w_K]$ be the concatenation of the weight vectors $w_k$ associated with the $k$-th party; then $p(y = k \mid x, W) = \exp(w_k^\top x) \, / \, \sum_{j=1}^{K} \exp(w_j^\top x)$.
We estimated $W$ using quasi-Newton gradient descent. The optimization objective was obtained by adding a penalization term to the negative log-likelihood of the multinomial logistic regression model; the optimization hence found the $W$ that minimized $-\sum_{i=1}^{N} \log p(y_i \mid x_i, W) + \lambda \, \lVert W \rVert_F^2$, where the sum runs over the $N$ training examples $(x_i, y_i)$.
Here $\lVert \cdot \rVert_F$ denotes the Frobenius norm and $\lambda$ is a regularization parameter controlling the complexity of the model. The regularization parameter was optimized on a log-scaled grid. The performance of the model was optimized using the classification accuracy, but we also report all other standard measures: precision, recall and f1-score.
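In scikit-learn terms, such a penalized multinomial logistic regression can be sketched as follows; note that scikit-learn exposes the regularization strength through C, the inverse of $\lambda$, and that lbfgs is a quasi-Newton solver. The texts and party labels below are toy values, not the actual training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts  = ["Wir fordern mehr erneuerbare Energien.",
          "Die Wettbewerbsfähigkeit muss gestärkt werden.",
          "Hartz IV gehört abgeschafft."]
labels = ["gruene", "cducsu", "linke"]   # toy party labels

pipeline = Pipeline([
    ("bow", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)),
])
pipeline.fit(texts, labels)
print(pipeline.predict(["Mehr Wettbewerb wagen."]))
```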
Three different classification problems were considered: (a) party affiliation, (b) government membership and (c) political view. Party affiliation is a five-class problem for the 17th legislative period and a four-class problem for the 18th legislative period. Government membership is a binary problem: whether or not a text stems from a party that belonged to the governing coalition. Political view classification is based on the 56 labels of the manifesto project, see sec:data and BIBREF8 . For the first two problems, party affiliation and government membership prediction, classifiers were trained on the parliament speeches. For the third problem classifiers were trained only on the manifesto data for which political view labels were available.
Optimisation of Model Parameters
The model pipeline contained a number of hyperparameters that were optimised using cross-validation. We first split the data into a training set that was used for optimisation of hyperparameters and a held-out test set for evaluating how well the model performs on in-domain data; wherever possible the generalisation performance of the models was also evaluated on out-of-domain data. Hyperparameters were optimised using grid search and 3-fold cross-validation within the training set only: a cross-validation split was made to obtain train/test data for the grid search, and for each setting of hyperparameters the entire pipeline was trained and evaluated – no data from the in-domain evaluation set or the out-of-domain evaluation set were used for hyperparameter optimisation. For the best setting of all hyperparameters the pipeline was trained again on all training data and evaluated on the evaluation data sets. For party affiliation prediction and government membership prediction the training and test sets comprised 90% and 10%, respectively, of all data in a given legislative period. The out-of-domain evaluation data were the texts from the party manifestos. For the political view prediction setting there was no out-of-domain evaluation data, so all labeled manifesto sentences in both legislative periods were split into a training set (90%) and an evaluation set (10%).
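The grid search itself can be sketched as below, reusing a vectorizer-classifier pipeline as in the previous sketch; `speeches` and `party_labels` stand for the loaded training texts and labels (an assumption of this sketch), and the parameter grid only illustrates the kind of values that were searched, not the exact grid.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# speeches, party_labels: lists of speech texts and party labels, assumed to be loaded already
train_texts, eval_texts, y_train, y_eval = train_test_split(
    speeches, party_labels, test_size=0.1, random_state=0)        # 90% / 10% split

pipeline = Pipeline([("bow", TfidfVectorizer()),
                     ("clf", LogisticRegression(solver="lbfgs", max_iter=1000))])
param_grid = {"bow__ngram_range": [(1, 1), (1, 2)],
              "bow__min_df": [1, 2, 5],
              "bow__use_idf": [True, False],
              "clf__C": np.logspace(-3, 3, 7)}                    # log-scaled regularization grid

search = GridSearchCV(pipeline, param_grid, cv=3, scoring="accuracy")
search.fit(train_texts, y_train)
print(search.best_params_, search.score(eval_texts, y_eval))
```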
Sentiment analysis
A publicly available keyword list was used to extract sentiments BIBREF10 . A sentiment vector $s \in \mathbb{R}^d$ was constructed from the sentiment polarity values in the sentiment dictionary. The sentiment index used for attributing positive or negative sentiment to a text was computed as the cosine similarity between the BOW vector $x$ and the sentiment vector $s$, i.e. $x^\top s \, / \, (\lVert x \rVert \, \lVert s \rVert)$.
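A small sketch of this sentiment index is given below; the function and argument names are mine, and `polarity` stands in for the word-to-polarity mapping taken from the keyword list of BIBREF10.

```python
import numpy as np

def sentiment_index(x_bow, vocabulary, polarity):
    """Cosine similarity between a bag-of-words vector and the sentiment vector.

    x_bow:      1-d array of (normalized) word counts, aligned with `vocabulary`
    vocabulary: dict mapping word -> column index of the BOW representation
    polarity:   dict mapping word -> sentiment polarity value from the keyword list
    """
    s = np.zeros(len(vocabulary))
    for word, idx in vocabulary.items():
        s[idx] = polarity.get(word, 0.0)
    denom = np.linalg.norm(x_bow) * np.linalg.norm(s)
    return float(x_bow @ s / denom) if denom > 0 else 0.0

# Toy usage
print(sentiment_index(np.array([1.0, 2.0, 0.0]),
                      {"gut": 0, "schlecht": 1, "und": 2},
                      {"gut": 0.7, "schlecht": -0.8}))
```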
Analysis of bag-of-words features
While interpretability is often put forward as one of the main advantages of linear models, interpreting them naively without modelling the noise covariances can lead to wrong conclusions, see e.g. BIBREF11 , BIBREF12 ; interpreting the coefficients of linear models (independent of the regularizer used) implicitly assumes uncorrelated features, and this assumption is violated by the text data used in this study. Thus direct interpretation of the model coefficients $W$ is problematic. In order to allow for better interpretation of the predictions and to assess which features are discriminative, correlation coefficients between each word and the party affiliation label were computed. The words corresponding to the top positive and negative correlations are shown in sec:wordpartycorrelations.
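One simple way to compute such word-party correlations is a Pearson correlation between each word's (normalized) count and a one-vs-rest party indicator, as sketched below; the function name and the one-vs-rest encoding are illustrative choices, not necessarily the exact procedure used.

```python
import numpy as np

def word_party_correlations(X, parties, party, top_k=10):
    """Correlate every word count with an indicator for `party` (1) vs. all other parties (0).

    X:       document-term matrix (n_documents x n_words), dense or scipy sparse
    parties: array of party labels, one per document
    Returns the indices of the top_k most positively and most negatively correlated words.
    """
    X = np.asarray(X.todense()) if hasattr(X, "todense") else np.asarray(X, dtype=float)
    y = (np.asarray(parties) == party).astype(float)
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    corr = Xc.T @ yc / (np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    order = np.argsort(corr)
    return order[-top_k:][::-1], order[:top_k]
```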
Results
The following sections give an overview of the results for all political bias prediction tasks. Some interpretations of the results are highlighted and a web application of the models is presented at the end of the section.
Predicting political party affiliation
The results for the political party affiliation prediction on held-out parliament data and on evaluation data are listed in tab:results17 for the 17th Bundestag and in tab:results18 for the 18th Bundestag, respectively. Shown are the evaluation results for in-domain data (held-out parliament speech texts) as well as the out-of-domain data; the party manifesto out-of-domain predictions were made on the sentence level.
When predicting party affiliation on text data from the same domain that was used for training the model, average precision and recall values above 0.6 are obtained. These results are comparable to those of BIBREF1 , who report a classification accuracy of 0.61 on a five-class problem of predicting party affiliation in the European parliament; the accuracy for the 17th Bundestag is 0.63. Results for the 18th Bundestag are difficult to compare, as the number of parties is four and the legislative period is not finished yet. For out-of-domain data the models yield significantly lower precision and recall values, between 0.3 and 0.4. This drop in out-of-domain prediction accuracy is in line with previous findings BIBREF0 . A main factor that made the out-of-domain prediction task particularly difficult is the short length of the strings to be classified, see also sec:data. In order to investigate whether this low out-of-domain prediction performance was due to the domain difference (parliament speech vs manifesto data) or due to the short length of the data points, the manifesto data was aggregated based on topic. The manifesto code political topic labels were used to concatenate the texts of each party into one of eight topics, see sec:data. The topic-level results are shown in tab:resultstopic and tab:confusiontopic and demonstrate that when the texts to be classified are sufficiently long and the word count statistics are sufficiently dense, the classification performance on out-of-domain data can, for some parties, reliably reach precision and recall values close to 1.0. This increase is in line with previous findings on the influence of text length on political bias prediction accuracy BIBREF1 .
In order to investigate the errors the models made, confusion matrices were extracted for the predictions on the out-of-domain evaluation data, both for sentence-level predictions (see tab:confusion) and for topic-level predictions (see tab:confusiontopic). One example illustrates that the mistakes the model makes can be associated with changes in party policy. The green party had been promoting policies for renewable energy and against nuclear energy in their manifestos prior to both legislative periods. Yet the statements of the green party are more often predicted to be from the government parties than from the party that originally promoted these green ideas, reflecting the trend that in these legislative periods the governing parties took over policies from the green party. This effect is even more pronounced in the topic-level predictions: a model trained on data from the 18th Bundestag predicts all manifesto topics of the green party to be from one of the parties of the governing coalition, CDU/CSU or SPD.
In addition to the party affiliation labels, government membership labels were used to train models that predict whether or not a text is from a party that belonged to the governing coalition of the Bundestag. In tab:resultsbinary17 and tab:resultsbinary18 the results are shown for the 17th and the 18th Bundestag, respectively. While the in-domain evaluation precision and recall values reach values close to 0.9, the out-of-domain evaluation again drops to values between 0.6 and 0.7. This is in line with the results on binary classification of political bias in the Canadian parliament BIBREF0 : the authors report classification accuracies between 0.8 and 0.87, and the accuracy in the 17th Bundestag was 0.85. While topic-level predictions were not performed in this binary setting, the party affiliation results in tab:resultstopic suggest that a similar increase in out-of-domain prediction accuracy could be achieved when aggregating texts into longer segments.
Predicting political views
Parties change their policies and positions in the political spectrum. More reliable categories for political bias are therefore party-independent labels for political views, see sec:data. A separate suite of experiments was run to train and test the prediction performance of the text classification models described in sec:model. As there was no out-of-domain evaluation set available in this setting, only the evaluation error on in-domain data is reported. Note, however, that in this experiment too the evaluation data was never seen by any model during training. In tab:resultsavgpoliticalview, results for the best and worst classes in terms of predictability are listed along with the average performance metrics across all classes. Precision and recall values of close to 0.5 on average can be considered rather high given the large number of labels.
Correlations between words and parties
The 10 highest and lowest correlations between individual words and the party affiliation label are shown for each party in fig:partywordcorrelations. Correlations were computed on the data from the current, 18th, legislative period. Some unspecific stopwords are excluded. The following paragraphs highlight some examples of words that appear to be preferentially used or avoided by each respective party. Even though interpretations of these results are problematic in that they neglect the context in which these words were mentioned, some interesting patterns can be found and related to the actual policies the parties are promoting.
The left party mostly criticises measures that affect social welfare negatively, such as the Hartz IV program. The main actors blamed by the left party for the decisions of the conservative governments are big companies (konzerne). The party rarely addresses concerns related to security (sicherheit).
The green party heavily criticised the secret negotiations about the TiSA agreement and insists on the formal inquiries that the representatives of the green party put forward in this matter (fragen, anfragen). They also often ask questions related to army projects (Rüstungsprojekte, Wehrbericht) or the military development in eastern Europe (Jalta).
The social democrats often use words related to the rights of the working class, as reflected by the frequent references to the International Labour Organisation (ILO) or to the rights of employees (Arbeitnehmerrechte). They rarely talk about competition (Wettbewerb) or climate policy (klimapolitik).
The conservative Christian party often uses words related to a pro-economy attitude, such as competitiveness or (economic) development (Wettbewerbsfähigkeit, Entwicklung), and words related to security (Sicherheit). The latter could be related to the ongoing debates about whether or not governments should be allowed to collect data, and thus restrict fundamental civil rights, in order to better secure the population. In contrast to the parties of the opposition, the conservatives rarely mention the word war (krieg) or related words.
Speech sentiment correlates with political power
In order to investigate the features that give rise to the classifiers' performance, the bag-of-words features were analysed with respect to their sentiment. The average sentiment of each political party is shown in fig:partysentiments. High values indicate more pronounced usage of positive words, whereas negative values indicate more pronounced usage of words associated with negative emotional content.
The results show an interesting relationship between political power and sentiment. Political power was evaluated in two ways: a) in terms of the number of seats a party holds and b) in terms of membership of the government. Correlating either of these two indicators of political power with the mean sentiment of a party shows a strong positive correlation between speech sentiment and political power. This pattern is evident from the data in fig:partysentiments and in tab:sentiments: in the current Bundestag, government membership correlates with positive sentiment with a correlation coefficient of 0.98, and the number of seats correlates with 0.89.
Note that there is one party, the social democrats (SPD), which has many seats and switched from opposition to government with the 18th Bundestag: With its participation in the government the average sentiment of this party switched sign from negative to positive, suggesting that positive sentiment is a strong indicator of government membership.
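Once the per-party aggregates are available, the correlation described above amounts to a single call; the values below are purely illustrative placeholders, not the numbers behind fig:partysentiments or tab:sentiments.

```python
import numpy as np

# Per-party aggregates (placeholder values only, ordered identically across arrays)
mean_sentiment = np.array([0.12, 0.09, -0.04, -0.07])
seats          = np.array([300, 190, 65, 60])     # seat counts per party (illustrative)
in_government  = np.array([1, 1, 0, 0])           # 1 = member of the governing coalition

print(np.corrcoef(mean_sentiment, seats)[0, 1])          # correlation with number of seats
print(np.corrcoef(mean_sentiment, in_government)[0, 1])  # correlation with government membership
```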
An example web application
To show an example use case of the above models, a web application was implemented that regularly downloads all articles from some major German newspaper websites and applies some simple topic modelling to them. For each news topic, the headlines of the articles are plotted along with the prediction of the political view of each article and two labels derived deterministically from the 56-class output: a left-right index and the political domain of the text, see BIBREF8 . Within each topic it is then possible to get an ordered (from left to right) overview of the articles on that topic. An example of one topic that emerged on March 31st is shown in fig:fipi. A preliminary demo is live at BIBREF13 and the code is available on github BIBREF14 .
Conclusions, Limitations and Outlook
This study presents a simple approach to automated political bias prediction. The results of these experiments show that automated political bias prediction is possible with above-chance accuracy in some cases. It is worth noting that even if the accuracies are not perfect, they are above chance and comparable with the results of similar studies BIBREF0 , BIBREF1 . While these results do not allow for usage in production systems for classification, it is entirely possible to use such a system as an assistive technology for human annotators in an active learning setting.
One of the main limiting factors of an automated political bias prediction system is the availability of training data. Most training data sets that are publicly available have an inherent bias as they are sampled from a different domain. This study tried to quantify the impact of this effect. For the cases in which evaluation data from two domains was available there was a pronounced drop in prediction accuracy between the in-domain evaluation set and the out-of-domain evaluation set. This effect was reported previously for similar data, see e.g. BIBREF0 . Also the finding that shorter texts are more difficult to classify than longer texts is in line with previous studies BIBREF1 . When considering texts of sufficient length (for instance by aggregating all texts of a given political topic) classification performance improved, and in some cases reliable predictions could be obtained even beyond the training text domain.
Some aspects of these analyses could be interesting for social science researchers; three of them are highlighted here. First, the misclassifications of a model can be related to changes in the policy of a party. Such analyses could be helpful to quantitatively investigate a change in policy. Second, analysing the word-party correlations shows that some discriminative words can be related to the political views of a party; this allows for validation of the models by human experts. Third, correlating the sentiment of a speech with measures of political power reveals a strong positive correlation between political power and positive sentiment. While such an insight might not seem very surprising in itself, this quantifiable link between power and sentiment could be useful nonetheless: sentiment analysis is a rather domain-independent measure, it can be easily automated and scaled up to massive amounts of text data. Combining sentiment features with other measures of political bias could potentially help to alleviate some of the domain-adaptation problems encountered when applying models trained on parliament data to data from other domains.
All data sets used in this study were publicly available; all code for the experiments and the link to a live web application can be found online BIBREF14 .
Acknowledgements
I would like to thank Friedrich Lindenberg for factoring out the https://github.com/bundestag/plpr-scraper from his bundestag project. Some backend configurations for the web application were taken from an earlier collaboration with Daniel Kirsch. Pola Lehmann and Michael Gaebler provided helpful feedback on an earlier version of the manuscript. Pola Lehman also helped with getting access to and documentation on the Manifestoproject data. | german |
6d0f2cce46bc962c6527f7b4a77721799f2455c6 | 6d0f2cce46bc962c6527f7b4a77721799f2455c6_0 | Q: Do changes in policies of the political actors account for all of the mistakes the model made?
Text: Introduction
Modern media generate a large amount of content at an ever increasing rate. Keeping an unbiased view on what media report on requires to understand the political bias of texts. In many cases it is obvious which political bias an author has. In other cases some expertise is required to judge the political bias of a text. When dealing with large amounts of text however there are simply not enough experts to examine all possible sources and publications. Assistive technology can help in this context to try and obtain a more unbiased sample of information.
Ideally one would choose for each topic a sample of reports from the entire political spectrum in order to form an unbiased opinion. But ordering media content with respect to the political spectrum at scale requires automated prediction of political bias. The aim of this study is to provide empirical evidence indicating that leveraging open data sources of german texts, automated political bias prediction is possible with above chance accuracy. These experimental results confirm and extend previous findings BIBREF0 , BIBREF1 ; a novel contribution of this work is a proof of concept which applies this technology to sort news article recommendations according to their political bias.
When human experts determine political bias of texts they will take responsibility for what they say about a text, and they can explain their decisions. This is a key difference to many statistical learning approaches. Not only is the responsibility question problematic, it can also be difficult to interpret some of the decisions. In order to validate and explain the predictions of the models three strategies that allow for better interpretations of the models are proposed. First the model misclassifications are related to changes in party policies. Second univariate measures of correlation between text features and party affiliation allow to relate the predictions to the kind of information that political experts use for interpreting texts. Third sentiment analysis is used to investigate whether this aspect of language has discriminatory power.
In the following sec:related briefly surveys some related work, thereafter sec:data gives an overview of the data acquisition and preprocessing methods, sec:model presents the model, training and evaluation procedures; in sec:results the results are discussed and sec:conclusion concludes with some interpretations of the results and future research directions.
Related Work
Throughout the last years automated content analyses for political texts have been conducted on a variety of text data sources (parliament data blogs, tweets, news articles, party manifestos) with a variety of methods, including sentiment analysis, stylistic analyses, standard bag-of-word (BOW) text feature classifiers and more advanced natural language processing tools. While a complete overview is beyond the scope of this work, the following paragraphs list similarities and differences between this study and previous work. For a more complete overview we refer the reader to BIBREF2 , BIBREF3 .
A similar approach to the one presented here was taken in BIBREF0 . The authors extracted BOW feature vectors and applied linear classifiers to predict political party affiliation of US congress speeches. They used data from the two chambers of the US congress, House and Senat, in order to assess generalization performance of a classifier trained on data from one chamber and tested on data from another. They found that accuracies of the model when trained on one domain and tested on another were significantly decreased. Generalization was also affected by the time difference between the political speeches used for training and those used for testing.
Other work has focused on developing dedicated methods for predicting political bias. Two popular methods are WordFish BIBREF4 and WordScores BIBREF5 , or improved versions thereof, see e.g. BIBREF6 . These approaches have been very valuable for a posteriori analysis of historical data but they do not seem to be used as much for analyses of new data in a predictive analytics setting. Moreover direct comparisons of the results obtained with these so called scaling methods with the results of the present study or those of studies as BIBREF0 are difficult, due to the different modeling and evaluation approaches: Validations of WordFish/WordScore based analyses often compare parameter estimates of the different models rather than predictions of these models on held-out data with respect to the same type of labels used to train the models.
Finally Hirst et al conducted a large number of experiments on data from the Canadian parliament and the European parliament; these experiments can be directly compared to the present study both in terms of methodology but also with respect to their results BIBREF1 . The authors show that a linear classifier trained on parliament speeches uses language elements of defense and attack to classify speeches, rather than ideological vocabulary. The authors also argue that emotional content plays an important role in automatic analysis of political texts. Furthermore their results show a clear dependency between length of a political text and the accuracy with which it can be classified correctly.
Taken together, there is a large body of literature in this expanding field in which scientists from quantitative empirical disciplines as well as political science experts collaborate on the challenging topic of automated analysis of political texts. Except for few exceptions most previous work has focused on binary classification or on assignment of a one dimensional policy position (mostly left vs right). Yet many applications require to take into account more subtle differences in political policies. This work focuses on more fine grained political view prediction: for one, the case of the german parliament is more diverse than two parliament systems, allowing for a distinction between more policies; second the political view labels considered are more fine grained than in previous studies. While previous studies used such labels only for partitioning training data BIBREF4 (which is not possible at test time in real-world applications where these labels are not known) the experiments presented in this study directly predict these labels. Another important contribution of this work is that many existing studies are primarily concerned with a posteriori analysis of historical data. This work aims at prediction of political bias on out-of-domain data with a focus on the practical application of the model on new data, for which a prototypical web application is provided. The experiments on out-of-domain generalization complement the work of BIBREF0 , BIBREF1 with results from data of the german parliament and novel sentiment analyses.
Data Sets and Feature Extraction
All experiments were run on publicly available data sets of german political texts and standard libraries for processing the text. The following sections describe the details of data acquisition and feature extraction.
Data
Annotated political text data was obtained from two sources: a) the discussions and speeches held in the german parliament (Bundestag) and b) all manifesto texts of parties running for election in the german parliament in the current 18th and the last, 17th, legislation period.
Parliament texts are annotated with the respective party label, which we take here as a proxy for political bias. The texts of parliament protocols are available through the website of the german bundestag; an open source API was used to query the data in a cleaned and structured format. In total 22784 speeches were extracted for the 17th legislative period and 11317 speeches for the 18th period, queried until March 2016.
For party manifestos another openly accessible API was used, provided by the Wissenschaftszentrum Berlin (WZB). The API is released as part of the Manifestoproject BIBREF7 . The data released in this project comprises the complete manifestos for each party that ran for election enriched with annotations by political experts. Each sentence (in some cases also parts of sentences) is annotated with one of 56 political labels. Examples of these labels are pro/contra protectionism, decentralism, centralism, pro/contra welfare; for a complete list and detailed explanations on how the annotators were instructed see BIBREF8 . The set of labels was developed by political scientists at the WZB and released for public use. All manifestos of parties that were running for election in this and the last legislative period were obtained. In total this resulted in 29451 political statements that had two types of labels: First the party affiliation of each political statement; this label was used to evaluate the party evaluation classifiers trained on the parliament speeches. For this purpose the data acquisition was constrained to only those parties that were elected into the parliament. Next to the party affiliation the political view labels were extracted. For the analyses based on political view labels all parties were considered, also those that did not make it into the parliament.
The length of each annotated statement in the party manifestos was rather short. The longest statement was 522 characters long, the 25%/50%/75% percentiles were 63/95/135 characters. Measured in words the longest data point was 65 words and the 25%/50%/75% percentiles were 8/12/17 words, respectively. This can be considered as a very valuable property of the data set, because it allows a fine grained resolution of party manifestos. However for a classifier (as well as for humans) such short sentences can be rather difficult to classify. In order to obtain less 'noisy' data points from each party – for the party affiliation task only – all statements were aggregated into political topics using the manifesto code labels. Each political view label is a three digit code, the first digit represents the political domain. In total there were eight political domains (topics): External Relations, Freedom and Democracy, Political System, Economy, Welfare and Quality of Life, Fabric of Society, Social Groups and a topic undefined, for a complete list see also BIBREF8 . These 8 topics were used to aggregate all statements in each manifesto into topics. Most party manifestos covered all eight of them, some party manifestos in the 17th Bundestag only covered seven.
Bag-of-Words Vectorization
First each data set was segmented into semantic units; in the case of parliament discussions this were the speeches, in the case of the party manifesto data semantic units were the sentences or sentence parts associated with one of the 56 political view labels. Parliament speeches were often interrupted; in this case each uninterrupted part of a speech was considered a semantic unit. Strings of each semantic unit were tokenised and transformed into bag-of-word vectors as implemented in scikit-learn BIBREF9 . The general idea of bag-of-words vectors is to simply count occurrences of words (or word sequences, also called n-grams) for each data point. A data point is usually a document, here it is the semantic units of parliament speeches and manifesto sentences, respectively. The text of each semantic unit is transformed into a vector INLINEFORM0 where INLINEFORM1 is the size of the dictionary; the INLINEFORM2 th entry of INLINEFORM3 contains the (normalized) count of the INLINEFORM4 th word (or sequence of words) in our dictionary. Several options for vectorizing the speeches were tried, including term-frequency-inverse-document-frequency normalisation, n-gram patterns up to size INLINEFORM5 and several cutoffs for discarding too frequent and too infrequent words. All of these hyperparameters were subjected to hyperparameter optimization as explained in sec:crossvalidation.
Classification Model and Training Procedure
Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let $y \in \{1,\dots,K\}$ be the true label, where $K$ is the total number of labels, and let $W = [w_1,\dots,w_K]$ be the concatenation of the weight vectors $w_k$ associated with the $k$-th party; then $p(y = k \mid x, W) = \exp(w_k^\top x) \, / \, \sum_{j=1}^{K} \exp(w_j^\top x)$.
We estimated $W$ using quasi-Newton gradient descent. The optimization objective was obtained by adding a penalization term to the negative log-likelihood of the multinomial logistic regression model; the optimization hence found the $W$ that minimized $-\sum_{i=1}^{N} \log p(y_i \mid x_i, W) + \lambda \, \lVert W \rVert_F^2$, where the sum runs over the $N$ training examples $(x_i, y_i)$.
Here $\lVert \cdot \rVert_F$ denotes the Frobenius norm and $\lambda$ is a regularization parameter controlling the complexity of the model. The regularization parameter was optimized on a log-scaled grid. The performance of the model was optimized using the classification accuracy, but we also report all other standard measures: precision, recall and f1-score.
Three different classification problems were considered:
Party affiliation is a five class problem for the 17th legislation period, and a four class problem for the 18th legislation period. Political view classification is based on the labels of the manifesto project, see sec:data and BIBREF8 . For each of first two problems, party affiliation and government membership prediction, classifiers were trained on the parliament speeches. For the third problem classifiers were trained only on the manifesto data for which political view labels were available.
Optimisation of Model Parameters
The model pipeline contained a number of hyperparameters that were optimised using cross-validation. We first split the training data into a training data set that was used for optimisation of hyperparameters and an held-out test data set for evaluating how well the model performs on in-domain data; wherever possible the generalisation performance of the models was also evaluated on out-of domain data. Hyperparameters were optimised using grid search and 3-fold cross-validation within the training set only: A cross-validation split was made to obtain train/test data for the grid search and for each setting of hyperparameters the entire pipeline was trained and evaluated – no data from the in-domain evaluation data or the out-of-domain evaluation data were used for hyperparameter optimisation. For the best setting of all hyperparameters the pipeline was trained again on all training data and evaluated on the evaluation data sets. For party affiliation prediction and government membership prediction the training and test set were 90% and 10%, respectively, of all data in a given legislative period. Out-of-domain evaluation data were the texts from party manifestos. For the political view prediction setting there was no out-of-domain evaluation data, so all labeled manifesto sentences in both legislative periods were split into a training and evaluation set of 90% (train) and 10% (evaluation).
Sentiment analysis
A publicly available keyword list was used to extract sentiments BIBREF10 . A sentiment vector $s \in \mathbb{R}^d$ was constructed from the sentiment polarity values in the sentiment dictionary. The sentiment index used for attributing positive or negative sentiment to a text was computed as the cosine similarity between the BOW vector $x$ and the sentiment vector $s$, i.e. $x^\top s \, / \, (\lVert x \rVert \, \lVert s \rVert)$.
Analysis of bag-of-words features
While interpretability of linear models is often propagated as one of their main advantages, doing so naively without modelling the noise covariances can lead to wrong conclusions, see e.g. BIBREF11 , BIBREF12 ; interpreting coefficients of linear models (independent of the regularizer used) implicitly assumes uncorrelated features; this assumption is violated by the text data used in this study. Thus direct interpretation of the model coefficients INLINEFORM0 is problematic. In order to allow for better interpretation of the predictions and to assess which features are discriminative correlation coefficients between each word and the party affiliation label were computed. The words corresponding to the top positive and negative correlations are shown in sec:wordpartycorrelations.
Results
The following sections give an overview of the results for all political bias prediction tasks. Some interpretations of the results are highlighted and a web application of the models is presented at the end of the section.
Predicting political party affiliation
The results for the political party affiliation prediction on held-out parliament data and on evaluation data are listed in tab:results17 for the 17th Bundestag and in tab:results18 for the 18th Bundestag, respectively. Shown are the evaluation results for in-domain data (held-out parliament speech texts) as well as the out-of-domain data; the party manifesto out-of-domain predictions were made on the sentence level.
When predicting party affiliation on text data from the same domain that was used for training the model, average precision and recall values of above 0.6 are obtained. These results are comparable to those of BIBREF1 who report a classification accuracy of 0.61 on a five class problem of prediction party affiliation in the European parliament; the accuracy for the 17th Bundestag is 0.63, results of the 18th Bundestag are difficult to compare as the number of parties is four and the legislation period is not finished yet. For out-of domain data the models yield significantly lower precision and recall values between 0.3 and 0.4. This drop in out of domain prediction accuracy is in line with previous findings BIBREF0 . A main factor that made the prediction on the out-of-domain prediction task particularly difficult is the short length of the strings to be classified, see also sec:data. In order to investigate whether this low out-of-domain prediction performance was due the domain difference (parliament speech vs manifesto data) or due to the short length of the data points, the manifesto data was aggregated based on the topic. The manifesto code political topics labels were used to concatenate texts of each party to one of eight topics, see sec:data. The topic level results are shown in tab:resultstopic and tab:confusiontopic and demonstrate that when the texts to be classified are sufficiently long and the word count statistics are sufficiently dense the classification performance on out of domain data can achieve in the case of some parties reliably precision and recall values close to 1.0. This increase is in line with previous findings on the influence of text length on political bias prediction accuracy BIBREF1 .
In order to investigate the errors the models made confusion matrices were extracted for the predictions on the out-of-domain evaluation data for sentence level predictions (see tab:confusion) as well as topic level predictions (see tab:confusiontopic). One example illustrates that the mistakes the model makes can be associated with changes in the party policy. The green party has been promoting policies for renewable energy and against nuclear energy in their manifestos prior to both legislative periods. Yet the statements of the green party are more often predicted to be from the government parties than from the party that originally promoted these green ideas, reflecting the trend that these legislative periods governing parties took over policies from the green party. This effect is even more pronounced in the topic level predictions: a model trained on data from the 18th Bundestag predicts all manifesto topics of the green party to be from one of the parties of the governing coalition, CDU/CSU or SPD.
Next to the party affiliation labels also government membership labels were used to train models that predict whether or not a text is from a party that belonged to a governing coalition of the Bundestag. In tab:resultsbinary17 and tab:resultsbinary18 the results are shown for the 17th and the 18th Bundestag, respectively. While the in-domain evaluation precision and recall values reach values close to 0.9, the out-of-domain evaluation drops again to values between 0.6 and 0.7. This is in line with the results on binary classification of political bias in the Canadian parliament BIBREF0 . The authors report classification accuracies between 0.8 and 0.87, the accuracy in the 17th Bundestag was 0.85. While topic-level predictions were not performed in this binary setting, the party affiliation results in tab:resultstopic suggest that a similar increase in out-of-domain prediction accuracy could be achieved when aggregating texts to longer segments.
Predicting political views
Parties change their policies and positions in the political spectrum. More reliable categories for political bias are party independent labels for political views, see sec:data. A separate suite of experiments was run to train and test the prediction performance of the text classifiers models described in sec:model. As there was no out-of-domain evaluation set available in this setting only evaluation error on in-domain data is reported. Note however that also in this experiment the evaluation data was never seen by any model during training time. In tab:resultsavgpoliticalview results for the best and worst classes, in terms of predictability, are listed along with the average performance metrics on all classes. Precision and recall values of close to 0.5 on average can be considered rather high considering the large number of labels.
Correlations between words and parties
The 10 highest and lowest correlations between individual words and the party affiliation label are shown for each party in fig:partywordcorrelations. Correlations were computed on the data from the current, 18th, legislative period. Some unspecific stopwords are excluded. The following paragraphs highlight some examples of words that appear to be preferentially used or avoided by each respective party. Even though interpretations of these results are problematic in that they neglect the context in which these words were mentioned some interesting patterns can be found and related to the actual policies the parties are promoting.
The left party mostly criticises measures that affect social welfare negatively, such as the Hartz IV program. Main actors that are blamed for decisions of the conservative governments by the left party are big companies (konzerne). Rarely the party addresses concerns related to security (sicherheit).
The green party heavily criticised the secret negotiations about the TiSA agreement and insists in formal inquiries that the representatives of the green party put forward in this matter (fragen, anfragen). They also often ask questions related to army projects (Rüstungsprojekte, Wehrbericht) or the military development in east europe (Jalta).
The social democrats often use words related to rights of the working class, as reflected by the heavy use of the International Labour Organisation (ILO) or rights of employes (Arbeitnehmerrechte). They rarely talk about competition (Wettbewerb) or climate change (klimapolitik).
The conservative christian party often uses words related to a pro-economy attitude, such as competitiveness or (economic) development (Wettbewerbsfähigkeit, Entwicklung) and words related to security (Sicherheit). The latter could be related to the ongoing debates about whether or not the governments should be allowed to collect data and thus restrict fundamental civil rights in order to better secure the population. In contrast to the parties of the opposition, the conservatives rarely mention the word war (krieg) or related words.
Speech sentiment correlates with political power
In order to investigate the features that give rise to the classifiers' performance the bag-of-words features were analysed with respect to their sentiment. The average sentiment of each political party is shown in fig:partysentiments. High values indicate more pronounced usage of positive words, whereas negative values indicate more pronounced usage of words associated with negative emotional content.
The results show an interesting relationship between political power and sentiment. Political power was evaluated in two ways: a) in terms of the number of seats a party has and b) in terms of membership of the government. Correlating either of these two indicators of political power with the mean sentiment of a party shows a strong positive correlation between speech sentiment and political power. This pattern is evident from the data in fig:partysentiments and in tab:sentiments: In the current Bundestag, government membership correlates with positive sentiment with a correlation coefficient of 0.98 and the number of seats correlates with 0.89.
Note that there is one party, the social democrats (SPD), which has many seats and switched from opposition to government with the 18th Bundestag: With its participation in the government the average sentiment of this party switched sign from negative to positive, suggesting that positive sentiment is a strong indicator of government membership.
An example web application
To show an example use case of the above models a web application was implemented that downloads regularly all articles from some major german news paper websites and applies some simple topic modelling to them. For each news article topic, headlines of articles are plotted along with the predictions of the political view of an article and two labels derived deterministically from the 56 class output, a left right index and the political domain of a text, see BIBREF8 . Within each topic it is then possible to get an ordered (from left to right) overview of the articles on that topic. An example of one topic that emerged on March 31st is shown in fig:fipi. A preliminary demo is live at BIBREF13 and the code is available on github BIBREF14 .
Conclusions, Limitations and Outlook
This study presents a simple approach for automated political bias prediction. The results of these experiments show that automated political bias prediction is possible with above chance accuracy in some cases. It is worth noting that even if the accuracies are not perfect, they are above chance and comparable with results of comparable studies BIBREF0 , BIBREF1 . While these results do not allow for usage in production systems for classification, it is well possible to use such a system as assistive technology for human annotators in an active learning setting.
One of the main limiting factors of an automated political bias prediction system is the availability of training data. Most training data sets that are publicly available have an inherent bias as they are sampled from a different domain. This study tried to quantify the impact of this effect. For the cases in which evaluation data from two domains was available there was a pronounced drop in prediction accuracy between the in domain evaluation set and the out of domain evaluation set. This effect was reported previously for similar data, see e.g. BIBREF0 . Also the finding that shorter texts are more difficult to classify than longer texts is in line with previous studies BIBREF1 . When considering texts of sufficient length (for instance by aggregating all texts of a given political topic) classification performance improved and in some cases reliable predictions could be obtained even beyond the training text domain.
Some aspects of these analyses could be interesting for social science researchers; three of these are highlighted here. First the misclassifications of a model can be related to the changes in policy of a party. Such analyses could be helpful to quantitatively investigate a change in policy. Second analysing the word-party correlations shows that some discriminative words can be related to the political views of a party; this allows for validation of the models by human experts. Third when correlating the sentiment of a speech with measures of political power there is a strong positive correlation between political power and positive sentiment. While such an insight in itself might seem not very surprising this quantifiable link between power and sentiment could be useful nonetheless: Sentiment analysis is a rather domain independent measure, it can be easily automated and scaled up to massive amounts of text data. Combining sentiment features with other measures of political bias could potentially help to alleviate some of the domain-adaptation problems encountered when applying models trained on parliament data to data from other domains.
All data sets used in this study were publicly available, all code for experiments and the link to a live web application can be found online BIBREF14 .
Acknowledgements
I would like to thank Friedrich Lindenberg for factoring out the https://github.com/bundestag/plpr-scraper from his bundestag project. Some backend configurations for the web application were taken from an earlier collaboration with Daniel Kirsch. Pola Lehmann and Michael Gaebler provided helpful feedback on an earlier version of the manuscript. Pola Lehman also helped with getting access to and documentation on the Manifestoproject data. | Yes |
5816ebf15e31bdf70e1de8234132e146d64e31eb | 5816ebf15e31bdf70e1de8234132e146d64e31eb_0 | Q: What model are the text features used in to provide predictions?
Text: Introduction
Modern media generate a large amount of content at an ever increasing rate. Keeping an unbiased view on what media report on requires to understand the political bias of texts. In many cases it is obvious which political bias an author has. In other cases some expertise is required to judge the political bias of a text. When dealing with large amounts of text however there are simply not enough experts to examine all possible sources and publications. Assistive technology can help in this context to try and obtain a more unbiased sample of information.
Ideally one would choose for each topic a sample of reports from the entire political spectrum in order to form an unbiased opinion. But ordering media content with respect to the political spectrum at scale requires automated prediction of political bias. The aim of this study is to provide empirical evidence indicating that leveraging open data sources of german texts, automated political bias prediction is possible with above chance accuracy. These experimental results confirm and extend previous findings BIBREF0 , BIBREF1 ; a novel contribution of this work is a proof of concept which applies this technology to sort news article recommendations according to their political bias.
When human experts determine political bias of texts they will take responsibility for what they say about a text, and they can explain their decisions. This is a key difference to many statistical learning approaches. Not only is the responsibility question problematic, it can also be difficult to interpret some of the decisions. In order to validate and explain the predictions of the models three strategies that allow for better interpretations of the models are proposed. First the model misclassifications are related to changes in party policies. Second univariate measures of correlation between text features and party affiliation allow to relate the predictions to the kind of information that political experts use for interpreting texts. Third sentiment analysis is used to investigate whether this aspect of language has discriminatory power.
In the following sec:related briefly surveys some related work, thereafter sec:data gives an overview of the data acquisition and preprocessing methods, sec:model presents the model, training and evaluation procedures; in sec:results the results are discussed and sec:conclusion concludes with some interpretations of the results and future research directions.
Related Work
Throughout the last years automated content analyses for political texts have been conducted on a variety of text data sources (parliament data blogs, tweets, news articles, party manifestos) with a variety of methods, including sentiment analysis, stylistic analyses, standard bag-of-word (BOW) text feature classifiers and more advanced natural language processing tools. While a complete overview is beyond the scope of this work, the following paragraphs list similarities and differences between this study and previous work. For a more complete overview we refer the reader to BIBREF2 , BIBREF3 .
A similar approach to the one presented here was taken in BIBREF0 . The authors extracted BOW feature vectors and applied linear classifiers to predict political party affiliation of US congress speeches. They used data from the two chambers of the US congress, House and Senat, in order to assess generalization performance of a classifier trained on data from one chamber and tested on data from another. They found that accuracies of the model when trained on one domain and tested on another were significantly decreased. Generalization was also affected by the time difference between the political speeches used for training and those used for testing.
Other work has focused on developing dedicated methods for predicting political bias. Two popular methods are WordFish BIBREF4 and WordScores BIBREF5 , or improved versions thereof, see e.g. BIBREF6 . These approaches have been very valuable for a posteriori analysis of historical data but they do not seem to be used as much for analyses of new data in a predictive analytics setting. Moreover direct comparisons of the results obtained with these so called scaling methods with the results of the present study or those of studies as BIBREF0 are difficult, due to the different modeling and evaluation approaches: Validations of WordFish/WordScore based analyses often compare parameter estimates of the different models rather than predictions of these models on held-out data with respect to the same type of labels used to train the models.
Finally Hirst et al conducted a large number of experiments on data from the Canadian parliament and the European parliament; these experiments can be directly compared to the present study both in terms of methodology but also with respect to their results BIBREF1 . The authors show that a linear classifier trained on parliament speeches uses language elements of defense and attack to classify speeches, rather than ideological vocabulary. The authors also argue that emotional content plays an important role in automatic analysis of political texts. Furthermore their results show a clear dependency between length of a political text and the accuracy with which it can be classified correctly.
Taken together, there is a large body of literature in this expanding field in which scientists from quantitative empirical disciplines as well as political science experts collaborate on the challenging topic of automated analysis of political texts. Except for few exceptions most previous work has focused on binary classification or on assignment of a one dimensional policy position (mostly left vs right). Yet many applications require to take into account more subtle differences in political policies. This work focuses on more fine grained political view prediction: for one, the case of the german parliament is more diverse than two parliament systems, allowing for a distinction between more policies; second the political view labels considered are more fine grained than in previous studies. While previous studies used such labels only for partitioning training data BIBREF4 (which is not possible at test time in real-world applications where these labels are not known) the experiments presented in this study directly predict these labels. Another important contribution of this work is that many existing studies are primarily concerned with a posteriori analysis of historical data. This work aims at prediction of political bias on out-of-domain data with a focus on the practical application of the model on new data, for which a prototypical web application is provided. The experiments on out-of-domain generalization complement the work of BIBREF0 , BIBREF1 with results from data of the german parliament and novel sentiment analyses.
Data Sets and Feature Extraction
All experiments were run on publicly available data sets of german political texts and standard libraries for processing the text. The following sections describe the details of data acquisition and feature extraction.
Data
Annotated political text data was obtained from two sources: a) the discussions and speeches held in the german parliament (Bundestag) and b) all manifesto texts of parties running for election in the german parliament in the current 18th and the last, 17th, legislation period.
Parliament texts are annotated with the respective party label, which we take here as a proxy for political bias. The texts of parliament protocols are available through the website of the german bundestag; an open source API was used to query the data in a cleaned and structured format. In total 22784 speeches were extracted for the 17th legislative period and 11317 speeches for the 18th period, queried until March 2016.
For party manifestos another openly accessible API was used, provided by the Wissenschaftszentrum Berlin (WZB). The API is released as part of the Manifestoproject BIBREF7 . The data released in this project comprises the complete manifestos for each party that ran for election enriched with annotations by political experts. Each sentence (in some cases also parts of sentences) is annotated with one of 56 political labels. Examples of these labels are pro/contra protectionism, decentralism, centralism, pro/contra welfare; for a complete list and detailed explanations on how the annotators were instructed see BIBREF8 . The set of labels was developed by political scientists at the WZB and released for public use. All manifestos of parties that were running for election in this and the last legislative period were obtained. In total this resulted in 29451 political statements that had two types of labels: First the party affiliation of each political statement; this label was used to evaluate the party evaluation classifiers trained on the parliament speeches. For this purpose the data acquisition was constrained to only those parties that were elected into the parliament. Next to the party affiliation the political view labels were extracted. For the analyses based on political view labels all parties were considered, also those that did not make it into the parliament.
Each annotated statement in the party manifestos was rather short. The longest statement was 522 characters long, and the 25%/50%/75% percentiles were 63/95/135 characters. Measured in words, the longest data point was 65 words and the 25%/50%/75% percentiles were 8/12/17 words, respectively. This can be considered a very valuable property of the data set, because it allows a fine-grained resolution of party manifestos. However, for a classifier (as well as for humans) such short sentences can be rather difficult to classify. In order to obtain less 'noisy' data points from each party – for the party affiliation task only – all statements were aggregated into political topics using the manifesto code labels. Each political view label is a three-digit code whose first digit represents the political domain. In total there were eight political domains (topics): External Relations, Freedom and Democracy, Political System, Economy, Welfare and Quality of Life, Fabric of Society, Social Groups and an undefined topic; for a complete list see also BIBREF8 . These 8 topics were used to aggregate all statements in each manifesto into topics. Most party manifestos covered all eight of them; some party manifestos in the 17th Bundestag covered only seven.
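To make this aggregation step concrete, the following sketch (not the original code; the tuple layout and field names are assumptions) groups manifesto statements by party and by the political domain encoded in the first digit of the manifesto code:

from collections import defaultdict

# Mapping from the first digit of a manifesto code to its political domain, following
# the order listed above; the exact dictionary layout is an assumption for this sketch.
DOMAINS = {
    "1": "External Relations", "2": "Freedom and Democracy", "3": "Political System",
    "4": "Economy", "5": "Welfare and Quality of Life", "6": "Fabric of Society",
    "7": "Social Groups", "0": "undefined",
}

def aggregate_by_topic(statements):
    """Concatenate all statements of a party that share the same political domain.

    `statements` is assumed to be an iterable of (text, code, party) tuples, where
    `code` is the three-digit manifesto code as a string.
    """
    topics = defaultdict(list)
    for text, code, party in statements:
        domain = DOMAINS.get(code[0], "undefined")
        topics[(party, domain)].append(text)
    # One aggregated, less noisy document per (party, domain) pair.
    return {key: " ".join(texts) for key, texts in topics.items()}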
Bag-of-Words Vectorization
First, each data set was segmented into semantic units; in the case of the parliament discussions these were the speeches, in the case of the party manifesto data the semantic units were the sentences or sentence parts associated with one of the 56 political view labels. Parliament speeches were often interrupted; in this case each uninterrupted part of a speech was considered a semantic unit. The strings of each semantic unit were tokenised and transformed into bag-of-words vectors as implemented in scikit-learn BIBREF9 . The general idea of bag-of-words vectors is to simply count occurrences of words (or word sequences, also called n-grams) for each data point. A data point is usually a document; here it is the semantic units of parliament speeches and manifesto sentences, respectively. The text of each semantic unit is transformed into a vector INLINEFORM0 where INLINEFORM1 is the size of the dictionary; the INLINEFORM2 th entry of INLINEFORM3 contains the (normalized) count of the INLINEFORM4 th word (or sequence of words) in our dictionary. Several options for vectorizing the speeches were tried, including term-frequency-inverse-document-frequency normalisation, n-gram patterns up to size INLINEFORM5 and several cutoffs for discarding too frequent and too infrequent words. All of these hyperparameters were subjected to hyperparameter optimization as explained in sec:crossvalidation.
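A minimal sketch of this vectorization step with scikit-learn is given below; the concrete n-gram range and document-frequency cutoffs are placeholders, since the actual values were selected by the hyperparameter search described in sec:crossvalidation:

from sklearn.feature_extraction.text import TfidfVectorizer

# Bag-of-words with optional tf-idf weighting; the hyperparameter values below are
# placeholders only, since the actual values were chosen by grid search.
vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),  # n-gram patterns up to size 2 (placeholder)
    min_df=2,            # discard too infrequent words (placeholder cutoff)
    max_df=0.5,          # discard too frequent words (placeholder cutoff)
    use_idf=True,        # term-frequency-inverse-document-frequency normalisation
)

semantic_units = ["...speech one...", "...speech two..."]  # speeches or manifesto sentences
X = vectorizer.fit_transform(semantic_units)  # sparse matrix with one row per semantic unit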
Classification Model and Training Procedure
Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let $y \in \{1, \dots, K\}$ be the true label, where $K$ is the total number of labels, and let $W = [w_1, \dots, w_K]$ be the concatenation of the weight vectors $w_k$ associated with the $k$th party; then
$$p(y = k \mid x, W) = \frac{\exp(w_k^\top x)}{\sum_{j=1}^{K} \exp(w_j^\top x)}.$$
We estimated INLINEFORM0 using quasi-newton gradient descent. The optimization function was obtained by adding a penalization term to the negative log-likelihood of the multinomial logistic regression objective and the optimization hence found the INLINEFORM1 that minimized DISPLAYFORM0
Where INLINEFORM0 denotes the Frobenius Norm and INLINEFORM1 is a regularization parameter controlling the complexity of the model. The regularization parameter was optimized on a log-scaled grid from INLINEFORM2 . The performance of the model was optimized using the classification accuracy, but we also report all other standard measures, precision ( INLINEFORM3 ), recall ( INLINEFORM4 ) and f1-score ( INLINEFORM5 ).
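In scikit-learn terms this corresponds roughly to the sketch below, where the regularization strength is expressed through the inverse parameter C and the lbfgs solver plays the role of the quasi-Newton optimizer; the grid boundaries are illustrative, not the ones actually used:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Multinomial logistic regression with an L2 (Frobenius-norm) penalty on the weights.
# C is the inverse regularization strength; the log-scaled grid here is illustrative.
C_GRID = np.logspace(-4, 4, 9)

def train_best_model(X_train, y_train, X_val, y_val):
    best_model, best_acc = None, -1.0
    for C in C_GRID:
        clf = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                 C=C, max_iter=1000)
        clf.fit(X_train, y_train)
        acc = clf.score(X_val, y_val)  # classification accuracy used for model selection
        if acc > best_acc:
            best_model, best_acc = clf, acc
    # Precision, recall and f1-score can be obtained with sklearn.metrics.classification_report.
    return best_model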
Three different classification problems were considered:
These were party affiliation prediction, government membership prediction, and political view classification. Party affiliation is a five-class problem for the 17th legislative period, and a four-class problem for the 18th legislative period. Political view classification is based on the labels of the manifesto project, see sec:data and BIBREF8 . For the first two problems, party affiliation and government membership prediction, classifiers were trained on the parliament speeches. For the third problem, classifiers were trained only on the manifesto data for which political view labels were available.
Optimisation of Model Parameters
The model pipeline contained a number of hyperparameters that were optimised using cross-validation. We first split the data into a training data set that was used for optimisation of hyperparameters and a held-out test data set for evaluating how well the model performs on in-domain data; wherever possible the generalisation performance of the models was also evaluated on out-of-domain data. Hyperparameters were optimised using grid search and 3-fold cross-validation within the training set only: a cross-validation split was made to obtain train/test data for the grid search, and for each setting of hyperparameters the entire pipeline was trained and evaluated – no data from the in-domain evaluation data or the out-of-domain evaluation data were used for hyperparameter optimisation. For the best setting of all hyperparameters the pipeline was trained again on all training data and evaluated on the evaluation data sets. For party affiliation prediction and government membership prediction the training and test sets were 90% and 10%, respectively, of all data in a given legislative period. Out-of-domain evaluation data were the texts from the party manifestos. For the political view prediction setting there was no out-of-domain evaluation data, so all labeled manifesto sentences in both legislative periods were split into a training set of 90% and an evaluation set of 10%.
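This protocol can be expressed compactly as a scikit-learn pipeline with grid search; the sketch below uses a hypothetical loader (load_semantic_units) and a reduced, illustrative parameter grid:

from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts, labels = load_semantic_units()  # hypothetical loader returning texts and party labels

# 90% training / 10% held-out in-domain evaluation split.
X_train, X_eval, y_train, y_eval = train_test_split(texts, labels, test_size=0.1,
                                                    random_state=0)

pipeline = Pipeline([
    ("bow", TfidfVectorizer()),
    ("clf", LogisticRegression(solver="lbfgs", max_iter=1000)),
])

# Reduced, illustrative grid; the actual search covered more vectorizer options.
param_grid = {
    "bow__ngram_range": [(1, 1), (1, 2)],
    "bow__min_df": [1, 2, 5],
    "clf__C": [0.01, 0.1, 1.0, 10.0, 100.0],
}

# 3-fold cross-validation on the training data only; the best setting is then refit
# on all training data before evaluation on the held-out set.
search = GridSearchCV(pipeline, param_grid, cv=3, scoring="accuracy")
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_eval, y_eval))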
Sentiment analysis
A publicly available keyword list was used to extract sentiments BIBREF10 . A sentiment vector $s$ was constructed from the sentiment polarity values in the sentiment dictionary. The sentiment index used for attributing positive or negative sentiment to a text was computed as the cosine similarity between the BOW vector $x$ and the sentiment vector $s$:
$$\mathrm{sentiment}(x) = \frac{x^\top s}{\|x\|\,\|s\|}.$$
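A sketch of this sentiment index, assuming the polarity lexicon has already been mapped into the same vocabulary space as the bag-of-words dictionary, could look as follows:

import numpy as np

def sentiment_index(x, s):
    """Cosine similarity between a bag-of-words vector x and the sentiment vector s.

    Both vectors are assumed to be dense and to live in the same vocabulary space;
    the entries of s are the polarity values from the sentiment dictionary, zero elsewhere.
    """
    denom = np.linalg.norm(x) * np.linalg.norm(s)
    return float(x @ s) / denom if denom > 0 else 0.0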
Analysis of bag-of-words features
While the interpretability of linear models is often put forward as one of their main advantages, interpreting them naively without modelling the noise covariances can lead to wrong conclusions, see e.g. BIBREF11 , BIBREF12 ; interpreting the coefficients of linear models (independent of the regularizer used) implicitly assumes uncorrelated features, and this assumption is violated by the text data used in this study. Thus a direct interpretation of the model coefficients $W$ is problematic. In order to allow for a better interpretation of the predictions and to assess which features are discriminative, correlation coefficients between each word and the party affiliation label were computed. The words corresponding to the top positive and negative correlations are shown in sec:wordpartycorrelations.
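One way to compute such word-label correlations, sketched here as an illustration rather than the original implementation, is to correlate each column of a dense bag-of-words matrix with a one-vs-rest indicator of the party label:

import numpy as np

def word_party_correlations(X, labels, party):
    """Pearson correlation between each word count and a binary 'is this party?' indicator.

    X is a dense (n_documents, n_words) bag-of-words matrix and labels an array of
    party names; sorting the returned coefficients yields the words most strongly
    associated with (or avoided by) the given party.
    """
    y = (np.asarray(labels) == party).astype(float)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    with np.errstate(divide="ignore", invalid="ignore"):
        r = (Xc * yc[:, None]).sum(axis=0) / denom
    return np.nan_to_num(r)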
Results
The following sections give an overview of the results for all political bias prediction tasks. Some interpretations of the results are highlighted and a web application of the models is presented at the end of the section.
Predicting political party affiliation
The results for the political party affiliation prediction on held-out parliament data and on evaluation data are listed in tab:results17 for the 17th Bundestag and in tab:results18 for the 18th Bundestag, respectively. Shown are the evaluation results for in-domain data (held-out parliament speech texts) as well as the out-of-domain data; the party manifesto out-of-domain predictions were made on the sentence level.
When predicting party affiliation on text data from the same domain that was used for training the model, average precision and recall values above 0.6 are obtained. These results are comparable to those of BIBREF1 , who report a classification accuracy of 0.61 on a five-class problem of predicting party affiliation in the European parliament; the accuracy for the 17th Bundestag is 0.63, while the results for the 18th Bundestag are difficult to compare as the number of parties is four and the legislative period is not finished yet. For out-of-domain data the models yield significantly lower precision and recall values, between 0.3 and 0.4. This drop in out-of-domain prediction accuracy is in line with previous findings BIBREF0 . A main factor that made the out-of-domain prediction task particularly difficult is the short length of the strings to be classified, see also sec:data. In order to investigate whether this low out-of-domain prediction performance was due to the domain difference (parliament speech vs manifesto data) or due to the short length of the data points, the manifesto data was aggregated by topic. The manifesto code topic labels were used to concatenate the texts of each party into one of eight topics, see sec:data. The topic-level results are shown in tab:resultstopic and tab:confusiontopic and demonstrate that when the texts to be classified are sufficiently long and the word count statistics are sufficiently dense, the classification performance on out-of-domain data can, for some parties, reliably achieve precision and recall values close to 1.0. This increase is in line with previous findings on the influence of text length on political bias prediction accuracy BIBREF1 .
In order to investigate the errors the models made, confusion matrices were extracted for the predictions on the out-of-domain evaluation data, for sentence-level predictions (see tab:confusion) as well as topic-level predictions (see tab:confusiontopic). One example illustrates that the mistakes the model makes can be associated with changes in party policy. The green party had been promoting policies for renewable energy and against nuclear energy in their manifestos prior to both legislative periods. Yet the statements of the green party are more often predicted to be from the government parties than from the party that originally promoted these green ideas, reflecting the trend that in these legislative periods the governing parties took over policies from the green party. This effect is even more pronounced in the topic-level predictions: a model trained on data from the 18th Bundestag predicts all manifesto topics of the green party to be from one of the parties of the governing coalition, CDU/CSU or SPD.
In addition to the party affiliation labels, government membership labels were used to train models that predict whether or not a text is from a party that belonged to a governing coalition of the Bundestag. In tab:resultsbinary17 and tab:resultsbinary18 the results are shown for the 17th and the 18th Bundestag, respectively. While the in-domain evaluation precision and recall values reach close to 0.9, the out-of-domain evaluation drops again to values between 0.6 and 0.7. This is in line with the results on binary classification of political bias in the Canadian parliament BIBREF0 : the authors report classification accuracies between 0.8 and 0.87, and the accuracy in the 17th Bundestag was 0.85. While topic-level predictions were not performed in this binary setting, the party affiliation results in tab:resultstopic suggest that a similar increase in out-of-domain prediction accuracy could be achieved when aggregating texts into longer segments.
Predicting political views
Parties change their policies and positions in the political spectrum. More reliable categories for political bias are party-independent labels for political views, see sec:data. A separate suite of experiments was run to train and test the prediction performance of the text classifier models described in sec:model. As there was no out-of-domain evaluation set available in this setting, only the evaluation error on in-domain data is reported. Note, however, that in this experiment too the evaluation data was never seen by any model during training. In tab:resultsavgpoliticalview the results for the best and worst classes, in terms of predictability, are listed along with the average performance metrics over all classes. Precision and recall values of close to 0.5 on average can be considered rather high given the large number of labels.
Correlations between words and parties
The 10 highest and lowest correlations between individual words and the party affiliation label are shown for each party in fig:partywordcorrelations. Correlations were computed on the data from the current, 18th, legislative period. Some unspecific stopwords are excluded. The following paragraphs highlight some examples of words that appear to be preferentially used or avoided by each respective party. Even though interpretations of these results are problematic in that they neglect the context in which these words were mentioned some interesting patterns can be found and related to the actual policies the parties are promoting.
The left party mostly criticises measures that affect social welfare negatively, such as the Hartz IV program. The main actors blamed by the left party for decisions of the conservative governments are big companies (konzerne). The party rarely addresses concerns related to security (sicherheit).
The green party heavily criticised the secret negotiations about the TiSA agreement and insists on the formal inquiries that its representatives put forward in this matter (fragen, anfragen). They also often ask questions related to army projects (Rüstungsprojekte, Wehrbericht) or military developments in eastern Europe (Jalta).
The social democrats often use words related to the rights of the working class, as reflected by frequent references to the International Labour Organisation (ILO) or to the rights of employees (Arbeitnehmerrechte). They rarely talk about competition (Wettbewerb) or climate change (klimapolitik).
The conservative christian party often uses words related to a pro-economy attitude, such as competitiveness or (economic) development (Wettbewerbsfähigkeit, Entwicklung) and words related to security (Sicherheit). The latter could be related to the ongoing debates about whether or not the governments should be allowed to collect data and thus restrict fundamental civil rights in order to better secure the population. In contrast to the parties of the opposition, the conservatives rarely mention the word war (krieg) or related words.
Speech sentiment correlates with political power
In order to investigate the features that give rise to the classifiers' performance the bag-of-words features were analysed with respect to their sentiment. The average sentiment of each political party is shown in fig:partysentiments. High values indicate more pronounced usage of positive words, whereas negative values indicate more pronounced usage of words associated with negative emotional content.
The results show an interesting relationship between political power and sentiment. Political power was evaluated in two ways: a) in terms of the number of seats a party has and b) in terms of membership of the government. Correlating either of these two indicators of political power with the mean sentiment of a party shows a strong positive correlation between speech sentiment and political power. This pattern is evident from the data in fig:partysentiments and in tab:sentiments: In the current Bundestag, government membership correlates with positive sentiment with a correlation coefficient of 0.98 and the number of seats correlates with 0.89.
Note that there is one party, the social democrats (SPD), which has many seats and switched from opposition to government with the 18th Bundestag: With its participation in the government the average sentiment of this party switched sign from negative to positive, suggesting that positive sentiment is a strong indicator of government membership.
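For reference, the reported correlations correspond to a simple Pearson correlation between per-party quantities; the numbers in the sketch below are placeholders, not the actual values from the study:

from scipy.stats import pearsonr

# Placeholder per-party values, one entry per party of the 18th Bundestag.
mean_sentiment = [0.10, 0.12, -0.05, -0.07]  # average sentiment index per party (illustrative)
seats = [311, 193, 64, 63]                   # number of seats (illustrative)
in_government = [1, 1, 0, 0]                 # government membership indicator

r_seats, _ = pearsonr(mean_sentiment, seats)
r_gov, _ = pearsonr(mean_sentiment, in_government)
print(f"correlation with seats: {r_seats:.2f}, with government membership: {r_gov:.2f}")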
An example web application
To show an example use case of the above models, a web application was implemented that regularly downloads all articles from some major German newspaper websites and applies simple topic modelling to them. For each news article topic, the headlines of the articles are plotted along with the predicted political view of each article and two labels derived deterministically from the 56-class output: a left-right index and the political domain of a text, see BIBREF8 . Within each topic it is then possible to get an overview of the articles on that topic, ordered from left to right. An example of one topic that emerged on March 31st is shown in fig:fipi. A preliminary demo is live at BIBREF13 and the code is available on github BIBREF14 .
Conclusions, Limitations and Outlook
This study presents a simple approach for automated political bias prediction. The results of these experiments show that automated political bias prediction is possible with above-chance accuracy in some cases. It is worth noting that even if the accuracies are not perfect, they are above chance and comparable with those of similar studies BIBREF0 , BIBREF1 . While these results do not allow for usage in production systems for classification, it is well possible to use such a system as assistive technology for human annotators in an active learning setting.
One of the main limiting factors of an automated political bias prediction system is the availability of training data. Most training data sets that are publicly available have an inherent bias, as they are sampled from a different domain. This study tried to quantify the impact of this effect. For the cases in which evaluation data from two domains was available, there was a pronounced drop in prediction accuracy between the in-domain evaluation set and the out-of-domain evaluation set. This effect was reported previously for similar data, see e.g. BIBREF0 . Also the finding that shorter texts are more difficult to classify than longer texts is in line with previous studies BIBREF1 . When considering texts of sufficient length (for instance by aggregating all texts of a given political topic) classification performance improved, and in some cases reliable predictions could be obtained even beyond the training text domain.
Some aspects of these analyses could be interesting for social science researchers; three of these are highlighted here. First, the misclassifications of a model can be related to changes in the policy of a party. Such analyses could be helpful to quantitatively investigate a change in policy. Second, analysing the word-party correlations shows that some discriminative words can be related to the political views of a party; this allows for validation of the models by human experts. Third, correlating the sentiment of a speech with measures of political power reveals a strong positive correlation between political power and positive sentiment. While such an insight in itself might not seem very surprising, this quantifiable link between power and sentiment could be useful nonetheless: sentiment analysis is a rather domain-independent measure; it can easily be automated and scaled up to massive amounts of text data. Combining sentiment features with other measures of political bias could potentially help to alleviate some of the domain-adaptation problems encountered when applying models trained on parliament data to data from other domains.
All data sets used in this study were publicly available, all code for experiments and the link to a live web application can be found online BIBREF14 .
Acknowledgements
I would like to thank Friedrich Lindenberg for factoring out the https://github.com/bundestag/plpr-scraper from his bundestag project. Some backend configurations for the web application were taken from an earlier collaboration with Daniel Kirsch. Pola Lehmann and Michael Gaebler provided helpful feedback on an earlier version of the manuscript. Pola Lehmann also helped with getting access to, and documentation on, the Manifestoproject data. | multinomial logistic regression
5a9f94ae296dda06c8aec0fb389ce2f68940ea88 | 5a9f94ae296dda06c8aec0fb389ce2f68940ea88_0 | Q: By how much does their method outperform the multi-head attention model?
Text: Introduction
Automatic speech recognition (ASR) is the task of converting a continuous speech signal into a sequence of discrete characters, and it is a key technology for realizing the interaction between humans and machines. ASR has great potential for various applications such as voice search and voice input, making our lives richer. Typical ASR systems BIBREF0 consist of many modules, such as an acoustic model, a lexicon model, and a language model. Factorizing the ASR system into these modules makes it possible to deal with each module as a separate problem. Over the past decades, this factorization has been the basis of ASR systems; however, it makes the system much more complex.
With the improvement of deep learning techniques, end-to-end approaches have been proposed BIBREF1 . In the end-to-end approach, a continuous acoustic signal or a sequence of acoustic features is directly converted into a sequence of characters with a single neural network. Therefore, the end-to-end approach does not require the factorization into several modules described above, making it easy to optimize the whole system. Furthermore, it does not require lexicon information, which is generally handcrafted by human experts.
The end-to-end approach is classified into two types. One approach is based on connectionist temporal classification (CTC) BIBREF2 , BIBREF3 , BIBREF1 , which makes it possible to handle the difference in length between input and output sequences with dynamic programming. The CTC-based approach can efficiently solve the sequential problem; however, CTC uses Markov assumptions to perform dynamic programming and predicts output symbols such as characters or phonemes for each frame independently. Consequently, except in the case of huge amounts of training data BIBREF4 , BIBREF5 , it requires a language model and graph-based decoding BIBREF6 .
The other approach utilizes an attention-based method BIBREF7 . In this approach, an encoder-decoder architecture BIBREF8 , BIBREF9 is used to perform a direct mapping from a sequence of input features into text. The encoder network converts the sequence of input features into a sequence of discriminative hidden states, and the decoder network uses an attention mechanism to get an alignment between each element of the output sequence and the encoder hidden states. It then estimates the output symbol using the weighted average of the hidden states, based on the alignment, as the input of the decoder network. Compared with the CTC-based approach, the attention-based method does not require any conditional independence assumptions, including the Markov assumption, language models, or complex decoding. However, a non-causal alignment problem is caused by the overly flexible alignment of the attention mechanism BIBREF10 . To address this issue, the study in BIBREF10 combines the objective function of the attention-based model with that of CTC to constrain the flexible alignments of the attention. Another study BIBREF11 uses multi-head attention (MHA) to get more suitable alignments. In MHA, multiple attentions are calculated and then integrated into a single attention. Using MHA enables the model to jointly focus on information from different representation subspaces at different positions BIBREF12 , leading to improved recognition performance.
Inspired by the idea of MHA, in this study we present a new network architecture called the multi-head decoder for end-to-end speech recognition, as an extension of the multi-head attention model. Instead of integrating at the attention level, our proposed method uses a separate decoder for each attention head and integrates their outputs to generate a final output. Furthermore, in order to make each head capture different modalities, different attention functions are used for each head, leading to improved recognition performance through an ensemble effect. To evaluate the effectiveness of our proposed method, we conduct an experimental evaluation using the Corpus of Spontaneous Japanese. Experimental results demonstrate that our proposed method outperforms conventional methods such as location-based and multi-head attention models, and that it can capture different speech/linguistic contexts within the attention-based encoder-decoder framework.
Attention-Based End-to-End ASR
The overview of attention-based network architecture is shown in Fig. FIGREF1 .
The attention-based method directly estimates a posterior INLINEFORM0 , where INLINEFORM1 represents a sequence of input features, INLINEFORM2 represents a sequence of output characters. The posterior INLINEFORM3 is factorized with a probabilistic chain rule as follows: DISPLAYFORM0
where INLINEFORM0 represents a subsequence INLINEFORM1 , and INLINEFORM2 is calculated as follows: DISPLAYFORM0 DISPLAYFORM1
where Eq. ( EQREF3 ) and Eq. () represent encoder and decoder networks, respectively, INLINEFORM0 represents an attention weight, INLINEFORM1 represents an attention weight vector, which is a sequence of attention weights INLINEFORM2 , INLINEFORM3 represents a subsequence of attention vectors INLINEFORM4 , INLINEFORM5 and INLINEFORM6 represent hidden states of encoder and decoder networks, respectively, and INLINEFORM7 represents the letter-wise hidden vector, which is a weighted summarization of hidden vectors with the attention weight vector INLINEFORM8 .
The encoder network in Eq. ( EQREF3 ) converts a sequence of input features INLINEFORM0 into frame-wise discriminative hidden states INLINEFORM1 , and it is typically modeled by a bidirectional long short-term memory recurrent neural network (BLSTM): DISPLAYFORM0
In the case of ASR, the length of the input sequence is significantly different from the length of the output sequence. Hence, the outputs of the BLSTM are often subsampled to reduce the computational cost BIBREF7 , BIBREF13 .
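A minimal PyTorch sketch of such an encoder with frame subsampling is shown below; it illustrates the idea only and is not the authors' ESPnet implementation, and the layer configuration is an assumption:

import torch
import torch.nn as nn

class BLSTMEncoder(nn.Module):
    """Bidirectional LSTM encoder that drops every second frame after selected layers."""

    def __init__(self, input_dim, hidden_dim, num_layers=3, subsample_layers=(1, 2)):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(input_dim if i == 0 else 2 * hidden_dim, hidden_dim,
                     batch_first=True, bidirectional=True) for i in range(num_layers)])
        self.subsample_layers = set(subsample_layers)

    def forward(self, x):             # x: (batch, T, input_dim)
        for i, lstm in enumerate(self.layers):
            x, _ = lstm(x)            # (batch, T, 2 * hidden_dim)
            if i in self.subsample_layers:
                x = x[:, ::2, :]      # subsampling: keep every second frame
        return x                      # frame-wise discriminative hidden states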
The attention weight INLINEFORM0 in Eq. ( EQREF4 ) represents a soft alignment between each element of the output sequence INLINEFORM1 and the encoder hidden states INLINEFORM2 .
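As an illustration of how such attention weights and the letter-wise hidden vector can be computed, the following sketch implements a simple additive (content-based) attention; location-based variants would additionally condition on the previous attention weights. This is a simplified stand-in, not the exact formulation used in the paper:

import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive attention: scores each encoder state against the previous decoder state."""

    def __init__(self, enc_dim, dec_dim, att_dim):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, att_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, att_dim, bias=False)
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, h, q):          # h: (batch, T, enc_dim), q: (batch, dec_dim)
        scores = self.v(torch.tanh(self.W_enc(h) + self.W_dec(q).unsqueeze(1)))  # (batch, T, 1)
        a = torch.softmax(scores.squeeze(-1), dim=-1)   # attention weights (soft alignment)
        r = torch.bmm(a.unsqueeze(1), h).squeeze(1)     # letter-wise hidden vector
        return r, a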
The decoder network in Eq. () estimates the next character INLINEFORM0 from the previous character INLINEFORM1 , its own hidden state vector INLINEFORM2 and the letter-wise hidden state vector INLINEFORM3 , similar to an RNN language model (RNNLM) BIBREF17 . It is typically modeled using an LSTM as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 represent trainable matrix and vector parameters, respectively.
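One possible simplified form of this decoder step, assuming a character embedding and an LSTM cell, is sketched below; a softmax over the returned logits gives the distribution over the next character:

import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """Single decoder step: the previous character, the previous decoder state and the
    letter-wise hidden vector r are combined to predict the next character."""

    def __init__(self, vocab_size, emb_dim, enc_dim, dec_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTMCell(emb_dim + enc_dim, dec_dim)
        self.output = nn.Linear(dec_dim + enc_dim, vocab_size)

    def forward(self, prev_char, state, r):
        # prev_char: (batch,) character ids; state: (q, c) LSTM state; r: (batch, enc_dim)
        inp = torch.cat([self.embed(prev_char), r], dim=-1)
        q, c = self.lstm(inp, state)
        logits = self.output(torch.cat([q, r], dim=-1))  # softmax over logits gives p(c_l | ...)
        return logits, (q, c)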
Finally, all of the above networks are optimized using back-propagation through time (BPTT) BIBREF18 to minimize the following objective function: DISPLAYFORM0
where INLINEFORM0 represents the ground truth of the previous characters.
Multi-Head Decoder
An overview of our proposed multi-head decoder (MHD) architecture is shown in Fig. FIGREF19 . In the MHD architecture, multiple attentions are calculated in the same manner as in the conventional multi-head attention (MHA) BIBREF12 . We first describe the conventional MHA, and then extend it to our proposed multi-head decoder (MHD).
Multi-head attention (MHA)
The letter-wise hidden vector at head INLINEFORM0 is calculated as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 represent trainable matrix parameters, and any type of attention in Eq. ( EQREF4 ) can be used for INLINEFORM3 in Eq. ( EQREF21 ).
In the case of MHA, the letter-wise hidden vectors of each head are integrated into a single vector with a trainable linear transformation: DISPLAYFORM0
where INLINEFORM0 is a trainable matrix parameter, INLINEFORM1 represents the number of heads.
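A sketch of this integration step is given below: per-head projections of the encoder states and the decoder state, one attention per head, and a final linear transformation of the concatenated head outputs. It reuses an AdditiveAttention-style module from the earlier sketch and is an illustration rather than the exact implementation:

import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Per-head projections of encoder states h and decoder state q, one attention per head,
    and a trainable linear map that integrates the concatenated head outputs."""

    def __init__(self, enc_dim, dec_dim, att_dim, num_heads, make_attention):
        super().__init__()
        # make_attention() is assumed to build an attention module operating on
        # att_dim-sized inputs, e.g. AdditiveAttention(att_dim, att_dim, att_dim).
        self.heads = nn.ModuleList([make_attention() for _ in range(num_heads)])
        self.proj_h = nn.ModuleList([nn.Linear(enc_dim, att_dim) for _ in range(num_heads)])
        self.proj_q = nn.ModuleList([nn.Linear(dec_dim, att_dim) for _ in range(num_heads)])
        self.out = nn.Linear(num_heads * att_dim, enc_dim)   # trainable integration matrix

    def forward(self, h, q):                      # h: (batch, T, enc_dim), q: (batch, dec_dim)
        head_outputs = []
        for att, ph, pq in zip(self.heads, self.proj_h, self.proj_q):
            r_n, _ = att(ph(h), pq(q))            # head-specific letter-wise hidden vector
            head_outputs.append(r_n)
        return self.out(torch.cat(head_outputs, dim=-1))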
Multi-head decoder (MHD)
On the other hand, in the case of MHD, instead of integrating at the attention level, we assign a separate decoder to each head and then integrate their outputs to get a final output. Since each attention decoder captures different modalities, this is expected to improve the recognition performance through an ensemble effect. The calculation of the attention weight at head INLINEFORM0 in Eq. ( EQREF21 ) is replaced with the following equation: DISPLAYFORM0
Instead of integrating the letter-wise hidden vectors INLINEFORM0 with a linear transformation, each letter-wise hidden vector INLINEFORM1 is fed to the INLINEFORM2 -th decoder LSTM: DISPLAYFORM0
Note that each LSTM has its own hidden state INLINEFORM0 which is used for the calculation of the attention weight INLINEFORM1 , while the input character INLINEFORM2 is the same among all of the LSTMs. Finally, all of the outputs are integrated as follows: DISPLAYFORM0
where INLINEFORM0 represents a trainable matrix parameter, and INLINEFORM1 represents a trainable vector parameter.
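The corresponding change for the multi-head decoder can be sketched as follows: one attention and one decoder LSTM per head, each keeping its own hidden state, with the per-head outputs integrated by a trainable linear map. The exact integration function is our assumption here, since only its general form is described:

import torch
import torch.nn as nn

class MultiHeadDecoderStep(nn.Module):
    """One attention module and one decoder LSTM per head; each decoder keeps its own
    hidden state, the input character is shared, and the per-head outputs are integrated."""

    def __init__(self, attentions, decoders, vocab_size):
        super().__init__()
        self.attentions = nn.ModuleList(attentions)   # e.g. one AdditiveAttention per head
        self.decoders = nn.ModuleList(decoders)       # e.g. one DecoderStep per head
        self.integrate = nn.Linear(len(decoders) * vocab_size, vocab_size)

    def forward(self, h, prev_char, states):
        logits_per_head, new_states = [], []
        for att, dec, state in zip(self.attentions, self.decoders, states):
            q_prev = state[0]                      # this head's own decoder hidden state
            r_n, _ = att(h, q_prev)                # head-specific letter-wise hidden vector
            logits, new_state = dec(prev_char, state, r_n)
            logits_per_head.append(logits)
            new_states.append(new_state)
        out = self.integrate(torch.cat(logits_per_head, dim=-1))
        return torch.log_softmax(out, dim=-1), new_states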
Heterogeneous multi-head decoder (HMHD)
As a further extension, we propose the heterogeneous multi-head decoder (HMHD). The original MHA methods BIBREF12 , BIBREF11 use the same attention function, such as dot-product or additive attention, for each head. In contrast, HMHD uses different attention functions for each head. We expect that this extension enables the model to capture even more diverse contexts in speech within the attention-based encoder-decoder framework.
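Under the same sketch as above, the heterogeneous variant only changes how the per-head attention modules are constructed; for instance, a dot-product head can be mixed with additive heads (class and dimension choices here are illustrative and follow the earlier sketches, not the authors' code):

import torch
import torch.nn as nn

class DotProductAttention(nn.Module):
    """Dot-product attention: scores encoder states against a projected decoder state."""

    def __init__(self, enc_dim, dec_dim):
        super().__init__()
        self.proj = nn.Linear(dec_dim, enc_dim, bias=False)

    def forward(self, h, q):                      # h: (batch, T, enc_dim), q: (batch, dec_dim)
        scores = torch.bmm(h, self.proj(q).unsqueeze(-1)).squeeze(-1)   # (batch, T)
        a = torch.softmax(scores, dim=-1)
        r = torch.bmm(a.unsqueeze(1), h).squeeze(1)                     # (batch, enc_dim)
        return r, a

# Illustrative heterogeneous head list: a different attention function per head.
enc_dim = dec_dim = att_dim = 320   # placeholder dimensions
attentions = [DotProductAttention(enc_dim, dec_dim),
              AdditiveAttention(enc_dim, dec_dim, att_dim),   # from the earlier sketch
              AdditiveAttention(enc_dim, dec_dim, att_dim),
              DotProductAttention(enc_dim, dec_dim)]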
Experimental Evaluation
To evaluate the performance of our proposed method, we conducted an experimental evaluation using the Corpus of Spontaneous Japanese (CSJ) BIBREF20 , which includes 581 hours of training data and three types of evaluation data. To compare the performance, we used the following methods: dot-product, additive, and location-based attention, as well as three variants of multi-head attention:
We used an input feature vector consisting of an 80-dimensional log Mel filter bank and a three-dimensional pitch feature, extracted using the open-source speech recognition toolkit Kaldi BIBREF21 . The encoder and decoder networks were a six-layered BLSTM with projection layer BIBREF22 (BLSTMP) and a one-layered LSTM, respectively. In the second and third bottom layers of the encoder, subsampling was performed to reduce the length of the utterance, yielding the length INLINEFORM0 . For MHA/MHD, we set the number of heads to four. For HMHD, we used two kinds of settings: (1) dot-product attention + additive attention + location-based attention + coverage mechanism attention (Dot+Add+Loc+Cov), and (2) two location-based attentions + two coverage mechanism attentions (2×Loc+2×Cov). The number of distinct output characters was 3,315, including Kanji, Hiragana, Katakana, alphabets, Arabic numerals and sos/eos symbols. In decoding, we used a beam search algorithm BIBREF9 with beam size 20. We manually set the minimum and maximum lengths of the output sequence to 0.1 and 0.5 times the length of the subsampled input sequence, respectively, and the length penalty to 0.1 times the length of the output sequence. All of the networks were trained using the end-to-end speech processing toolkit ESPnet BIBREF23 with a single GPU (Titan X Pascal). Character error rate (CER) was used as the metric. The details of the experimental conditions are shown in Table TABREF28 .
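For reference, character error rate can be computed as the Levenshtein (edit) distance between the hypothesis and reference character sequences, normalized by the reference length; a small self-contained sketch:

def character_error_rate(reference, hypothesis):
    """Levenshtein (edit) distance between character sequences, divided by the reference length."""
    ref, hyp = list(reference), list(hypothesis)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)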
Experimental results are shown in Table TABREF35 .
First, we focus on the results of the conventional methods. It is generally known that location-based attention yields better performance than additive attention BIBREF10 . However, Japanese sentences are much shorter than English sentences, which makes the use of location-based attention less effective. In most of the cases, the use of MHA improves the recognition performance. Next, we focus on the effectiveness of our proposed MHD architecture. Compared with MHA-Loc, MHD-Loc (proposed method) improved the performance in Tasks 1 and 2, while we observed a degradation in Task 3. However, the heterogeneous extension (HMHD), as introduced in Section SECREF27 , brings a further improvement over MHD, achieving the best performance among all of the methods for all test sets.
Finally, Figure FIGREF36 shows the alignment information of each head of HMHD (2×Loc+2×Cov), which was obtained by visualizing the attention weights.
Interestingly, the alignments at the right and left ends seem to capture more abstract dynamics of speech, while the remaining two alignments behave like normal alignments obtained by a standard attention mechanism. Thus, we can see that the attention weights of each head have different tendencies, and this supports our hypothesis that HMHD can capture different speech/linguistic contexts within its framework.
Conclusions
In this paper, we proposed a new network architecture called the multi-head decoder for end-to-end speech recognition, as an extension of the multi-head attention model. Instead of integrating at the attention level, our proposed method utilized a separate decoder for each attention head and integrated their outputs to generate a final output. Furthermore, in order to make each head capture different modalities, we used different attention functions for each head. To evaluate the effectiveness of our proposed method, we conducted an experimental evaluation using the Corpus of Spontaneous Japanese. Experimental results demonstrated that our proposed methods outperformed conventional methods such as location-based and multi-head attention models, and that they could capture different speech/linguistic contexts within the attention-based encoder-decoder framework.
In future work, we will combine the multi-head decoder architecture with the joint CTC/attention architecture BIBREF10 , and evaluate the performance using other databases. | Their average improvement in Character Error Rate over the best MHA model was 0.33 percent points.
85912b87b16b45cde79039447a70bd1f6f1f8361 | 85912b87b16b45cde79039447a70bd1f6f1f8361_0 | Q: How large is the corpus they use?
| 449050
948327d7aa9f85943aac59e3f8613765861f97ff | 948327d7aa9f85943aac59e3f8613765861f97ff_0 | Q: Does each attention head in the decoder calculate the same output?
Text: Introduction
Automatic speech recognition (ASR) is the task to convert a continuous speech signal into a sequence of discrete characters, and it is a key technology to realize the interaction between human and machine. ASR has a great potential for various applications such as voice search and voice input, making our lives more rich. Typical ASR systems BIBREF0 consist of many modules such as an acoustic model, a lexicon model, and a language model. Factorizing the ASR system into these modules makes it possible to deal with each module as a separate problem. Over the past decades, this factorization has been the basis of the ASR system, however, it makes the system much more complex.
With the improvement of deep learning techniques, end-to-end approaches have been proposed BIBREF1 . In the end-to-end approach, a continuous acoustic signal or a sequence of acoustic features is directly converted into a sequence of characters with a single neural network. Therefore, the end-to-end approach does not require the factorization into several modules, as described above, making it easy to optimize the whole system. Furthermore, it does not require lexicon information, which is handcrafted by human experts in general.
The end-to-end approach is classified into two types. One approach is based on connectionist temporal classification (CTC) BIBREF2 , BIBREF3 , BIBREF1 , which makes it possible to handle the difference in the length of input and output sequences with dynamic programming. The CTC-based approach can efficiently solve the sequential problem, however, CTC uses Markov assumptions to perform dynamic programming and predicts output symbols such as characters or phonemes for each frame independently. Consequently, except in the case of huge training data BIBREF4 , BIBREF5 , it requires the language model and graph-based decoding BIBREF6 .
The other approach utilizes attention-based method BIBREF7 . In this approach, encoder-decoder architecture BIBREF8 , BIBREF9 is used to perform a direct mapping from a sequence of input features into text. The encoder network converts the sequence of input features to that of discriminative hidden states, and the decoder network uses attention mechanism to get an alignment between each element of the output sequence and the encoder hidden states. And then it estimates the output symbol using weighted averaged hidden states, which is based on the alignment, as the inputs of the decoder network. Compared with the CTC-based approach, the attention-based method does not require any conditional independence assumptions including the Markov assumption, language models, and complex decoding. However, non-causal alignment problem is caused by a too flexible alignment of the attention mechanism BIBREF10 . To address this issue, the study BIBREF10 combines the objective function of the attention-based model with that of CTC to constrain flexible alignments of the attention. Another study BIBREF11 uses a multi-head attention (MHA) to get more suitable alignments. In MHA, multiple attentions are calculated, and then, they are integrated into a single attention. Using MHA enables the model to jointly focus on information from different representation subspaces at different positions BIBREF12 , leading to the improvement of the recognition performance.
Inspired by the idea of MHA, in this study we present a new network architecture called multi-head decoder for end-to-end speech recognition as an extension of a multi-head attention model. Instead of the integration in the attention level, our proposed method uses multiple decoders for each attention and integrates their outputs to generate a final output. Furthermore, in order to make each head to capture the different modalities, different attention functions are used for each head, leading to the improvement of the recognition performance with an ensemble effect. To evaluate the effectiveness of our proposed method, we conduct an experimental evaluation using Corpus of Spontaneous Japanese. Experimental results demonstrate that our proposed method outperforms the conventional methods such as location-based and multi-head attention models, and that it can capture different speech/linguistic contexts within the attention-based encoder-decoder framework.
Attention-Based End-to-End ASR
The overview of attention-based network architecture is shown in Fig. FIGREF1 .
The attention-based method directly estimates a posterior INLINEFORM0 , where INLINEFORM1 represents a sequence of input features, INLINEFORM2 represents a sequence of output characters. The posterior INLINEFORM3 is factorized with a probabilistic chain rule as follows: DISPLAYFORM0
where INLINEFORM0 represents a subsequence INLINEFORM1 , and INLINEFORM2 is calculated as follows: DISPLAYFORM0 DISPLAYFORM1
where Eq. ( EQREF3 ) and Eq. () represent encoder and decoder networks, respectively, INLINEFORM0 represents an attention weight, INLINEFORM1 represents an attention weight vector, which is a sequence of attention weights INLINEFORM2 , INLINEFORM3 represents a subsequence of attention vectors INLINEFORM4 , INLINEFORM5 and INLINEFORM6 represent hidden states of encoder and decoder networks, respectively, and INLINEFORM7 represents the letter-wise hidden vector, which is a weighted summarization of hidden vectors with the attention weight vector INLINEFORM8 .
The encoder network in Eq. ( EQREF3 ) converts a sequence of input features INLINEFORM0 into frame-wise discriminative hidden states INLINEFORM1 , and it is typically modeled by a bidirectional long short-term memory recurrent neural network (BLSTM): DISPLAYFORM0
In the case of ASR, the length of the input sequence is significantly different from the length of the output sequence. Hence, basically outputs of BLSTM are often subsampled to reduce the computational cost BIBREF7 , BIBREF13 .
The attention weight INLINEFORM0 in Eq. ( EQREF4 ) represents a soft alignment between each element of the output sequence INLINEFORM1 and the encoder hidden states INLINEFORM2 .
The decoder network in Eq. () estimates the next character INLINEFORM0 from the previous character INLINEFORM1 , hidden state vector of itself INLINEFORM2 and the letter-wise hidden state vector INLINEFORM3 , similar to RNN language model (RNNLM) BIBREF17 . It is typically modeled using LSTM as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 represent trainable matrix and vector parameters, respectively.
Finally, the whole of above networks are optimized using back-propagation through time (BPTT) BIBREF18 to minimize the following objective function: DISPLAYFORM0
where INLINEFORM0 represents the ground truth of the previous characters.
Multi-Head Decoder
The overview of our proposed multi-head decoder (MHD) architecture is shown in Fig. FIGREF19 . In MHD architecture, multiple attentions are calculated with the same manner in the conventional multi-head attention (MHA) BIBREF12 . We first describe the conventional MHA, and extend it to our proposed multi-head decoder (MHD).
Multi-head attention (MHA)
The layer-wise hidden vector at the head INLINEFORM0 is calculated as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 represent trainable matrix parameters, and any types of attention in Eq. ( EQREF4 ) can be used for INLINEFORM3 in Eq. ( EQREF21 ).
In the case of MHA, the layer-wise hidden vectors of each head are integrated into a single vector with a trainable linear transformation: DISPLAYFORM0
where INLINEFORM0 is a trainable matrix parameter, INLINEFORM1 represents the number of heads.
Multi-head decoder (MHD)
On the other hand, in the case of MHD, instead of the integration at attention level, we assign multiple decoders for each head and then integrate their outputs to get a final output. Since each attention decoder captures different modalities, it is expected to improve the recognition performance with an ensemble effect. The calculation of the attention weight at the head INLINEFORM0 in Eq. ( EQREF21 ) is replaced with following equation: DISPLAYFORM0
Instead of the integration of the letter-wise hidden vectors INLINEFORM0 with linear transformation, each letter-wise hidden vector INLINEFORM1 is fed to INLINEFORM2 -th decoder LSTM: DISPLAYFORM0
Note that each LSTM has its own hidden state INLINEFORM0 which is used for the calculation of the attention weight INLINEFORM1 , while the input character INLINEFORM2 is the same among all of the LSTMs. Finally, all of the outputs are integrated as follows: DISPLAYFORM0
where INLINEFORM0 represents a trainable matrix parameter, and INLINEFORM1 represents a trainable vector parameter.
Heterogeneous multi-head decoder (HMHD)
As a further extension, we propose heterogeneous multi-head decoder (HMHD). Original MHA methods BIBREF12 , BIBREF11 use the same attention function such as dot-product or additive attention for each head. On the other hand, HMHD uses different attention functions for each head. We expect that this extension enables to capture the further different context in speech within the attention-based encoder-decoder framework.
Experimental Evaluation
To evaluate the performance of our proposed method, we conducted experimental evaluation using Corpus of Spontaneous Japanese (CSJ) BIBREF20 , including 581 hours of training data, and three types of evaluation data. To compare the performance, we used following dot, additive, location, and three variants of multi-head attention methods:
The input feature vector consisted of an 80-dimensional log Mel filter bank and a three-dimensional pitch feature, extracted using the open-source speech recognition toolkit Kaldi BIBREF21 . The encoder and decoder networks were a six-layered BLSTM with projection layer BIBREF22 (BLSTMP) and a one-layered LSTM, respectively. In the second and third bottom layers of the encoder, subsampling was performed to reduce the utterance length, yielding the length INLINEFORM0 . For MHA/MHD, we set the number of heads to four. For HMHD, we used two kinds of settings: (1) dot-product attention + additive attention + location-based attention + coverage mechanism attention (Dot+Add+Loc+Cov), and (2) two location-based attentions + two coverage mechanism attentions (2 INLINEFORM1 Loc+2 INLINEFORM2 Cov). The number of distinct output characters was 3,315, including Kanji, Hiragana, Katakana, alphabet characters, Arabic numerals, and sos/eos symbols. In decoding, we used the beam search algorithm BIBREF9 with a beam size of 20. We manually set the minimum and maximum lengths of the output sequence to 0.1 and 0.5 times the length of the subsampled input sequence, respectively, and the length penalty to 0.1 times the length of the output sequence. All of the networks were trained using the end-to-end speech processing toolkit ESPnet BIBREF23 with a single GPU (Titan X Pascal). Character error rate (CER) was used as the evaluation metric. The details of the experimental conditions are shown in Table TABREF28 .
Experimental results are shown in Table TABREF35 .
First, we focus on the results of the conventional methods. It is generally known that location-based attention yields better performance than additive attention BIBREF10 . However, Japanese sentences are much shorter than English sentences, which makes location-based attention less effective. In most cases, the use of MHA improves recognition performance. Next, we focus on the effectiveness of our proposed MHD architecture. Compared with MHA-Loc, MHD-Loc (the proposed method) improved performance in Tasks 1 and 2, while we observed degradation in Task 3. However, the heterogeneous extension (HMHD), as introduced in Section SECREF27 , brings further improvement over MHD, achieving the best performance among all of the methods on all test sets.
Finally, Figure FIGREF36 shows the alignment information of each head of HMHD (2 INLINEFORM0 Loc+2 INLINEFORM1 Cov), which was obtained by visualizing the attention weights.
Interestingly, the alignments at the right and left ends seem to capture more abstract dynamics of speech, while the remaining two alignments behave like normal alignments obtained by a standard attention mechanism. Thus, we can see that the attention weights of each head have a different tendency, which supports our hypothesis that HMHD can capture different speech/linguistic contexts within its framework.
Conclusions
In this paper, we proposed a new network architecture called the multi-head decoder for end-to-end speech recognition as an extension of the multi-head attention model. Instead of integrating at the attention level, our proposed method uses a separate decoder for each attention head and integrates their outputs to generate the final output. Furthermore, in order to make each head capture different modalities, we used a different attention function for each head. To evaluate the effectiveness of our proposed method, we conducted an experimental evaluation using the Corpus of Spontaneous Japanese. Experimental results demonstrated that our proposed methods outperformed conventional methods such as location-based and multi-head attention models, and that they could capture different speech/linguistic contexts within the attention-based encoder-decoder framework.
In future work, we will combine the multi-head decoder architecture with the Joint CTC/Attention architecture BIBREF10 and evaluate its performance on other databases. | No
cdf7e60150a166d41baed9dad539e3b93b544624 | cdf7e60150a166d41baed9dad539e3b93b544624_0 | Q: Which distributional methods did they consider?
Text: Introduction
Hierarchical relationships play a central role in knowledge representation and reasoning. Hypernym detection, i.e., the modeling of word-level hierarchies, has long been an important task in natural language processing. Starting with BIBREF0 , pattern-based methods have been one of the most influential approaches to this problem. Their key idea is to exploit certain lexico-syntactic patterns to detect is-a relations in text. For instance, patterns like “ INLINEFORM0 such as INLINEFORM1 ”, or “ INLINEFORM2 and other INLINEFORM3 ” often indicate hypernymy relations of the form INLINEFORM4 is-a INLINEFORM5 . Such patterns may be predefined, or they may be learned automatically BIBREF1 , BIBREF2 . However, a well-known problem of Hearst-like patterns is their extreme sparsity: words must co-occur in exactly the right configuration, or else no relation can be detected.
To alleviate the sparsity issue, the focus in hypernymy detection has recently shifted to distributional representations, wherein words are represented as vectors based on their distribution across large corpora. Such methods offer rich representations of lexical meaning, alleviating the sparsity problem, but require specialized similarity measures to distinguish different lexical relationships. The most successful measures to date are generally inspired by the Distributional Inclusion Hypothesis (DIH) BIBREF3 , which states roughly that contexts in which a narrow term INLINEFORM0 may appear (“cat”) should be a subset of the contexts in which a broader term INLINEFORM1 (“animal”) may appear. Intuitively, the DIH states that we should be able to replace any occurrence of “cat” with “animal” and still have a valid utterance. An important insight from work on distributional methods is that the definition of context is often critical to the success of a system BIBREF4 . Some distributional representations, like positional or dependency-based contexts, may even capture crude Hearst pattern-like features BIBREF5 , BIBREF6 .
While both approaches for hypernym detection rely on co-occurrences within certain contexts, they differ in their context selection strategy: pattern-based methods use predefined manually-curated patterns to generate high-precision extractions while DIH methods rely on unconstrained word co-occurrences in large corpora.
Here, we revisit the idea of using pattern-based methods for hypernym detection. We evaluate several pattern-based models on modern, large corpora and compare them to methods based on the DIH. We find that simple pattern-based methods consistently outperform specialized DIH methods on several difficult hypernymy tasks, including detection, direction prediction, and graded entailment ranking. Moreover, we find that taking low-rank embeddings of pattern-based models substantially improves performance by remedying the sparsity issue. Overall, our results show that Hearst patterns provide high-quality and robust predictions on large corpora by capturing important contextual constraints, which are not yet modeled in distributional methods.
Models
In the following, we discuss pattern-based and distributional methods to detect hypernymy relations. We explicitly consider only relatively simple pattern-based approaches that allow us to directly compare their performance to DIH-based methods.
Pattern-based Hypernym Detection
First, let INLINEFORM0 denote the set of hypernymy relations that have been extracted via Hearst patterns from a text corpus INLINEFORM1 . Furthermore, let INLINEFORM2 denote the count of how often INLINEFORM3 has been extracted and let INLINEFORM4 denote the total number of extractions. In the first, most direct application of Hearst patterns, we then simply use the counts INLINEFORM5 or, equivalently, the extraction probability DISPLAYFORM0
to predict hypernymy relations from INLINEFORM0 .
However, simple extraction probabilities as in eq:prob are skewed by the occurrence probabilities of their constituent words. For instance, it is more likely that we extract (France, country) over (France, republic), just because the word country is more likely to occur than republic. This skew in word distributions is well-known for natural language and also translates to Hearst patterns (see also fig:dist). For this reason, we also consider predicting hypernymy relations based on the Pointwise Mutual Information of Hearst patterns: First, let INLINEFORM0 and INLINEFORM1 denote the probability that INLINEFORM2 occurs as a hyponym and hypernym, respectively. We then define the Positive Pointwise Mutual Information for INLINEFORM3 as DISPLAYFORM0
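A minimal sketch of both scores computed from raw pattern counts (the toy pair list stands in for real Hearst-pattern extractions; variable names are illustrative, not the paper's implementation):
```python
from collections import Counter
from math import log

# (hyponym, hypernym) pairs as they would come out of Hearst-pattern matching
extractions = [("cat", "animal"), ("cat", "animal"), ("dog", "animal"),
               ("france", "country"), ("france", "country"), ("paris", "city")]

pair_count = Counter(extractions)
hypo_count = Counter(x for x, _ in extractions)
hyper_count = Counter(y for _, y in extractions)
N = len(extractions)

def p(x, y):
    """Extraction probability: pair count divided by total number of extractions."""
    return pair_count[(x, y)] / N

def ppmi(x, y):
    """Positive pointwise mutual information of the pair under the pattern counts."""
    if pair_count[(x, y)] == 0:
        return 0.0
    pmi = log(p(x, y) / ((hypo_count[x] / N) * (hyper_count[y] / N)))
    return max(0.0, pmi)

print(p("cat", "animal"), ppmi("cat", "animal"))
```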
While eq:pmi can correct for different word occurrence probabilities, it cannot handle missing data. However, sparsity is one of the main issues when using Hearst patterns, as a necessarily incomplete set of extraction rules will lead inevitably to missing extractions. For this purpose, we also study low-rank embeddings of the PPMI matrix, which allow us to make predictions for unseen pairs. In particular, let INLINEFORM0 denote the number of unique terms in INLINEFORM1 . Furthermore, let INLINEFORM2 be the PPMI matrix with entries INLINEFORM3 and let INLINEFORM4 be its Singular Value Decomposition (SVD). We can then predict hypernymy relations based on the truncated SVD of INLINEFORM5 via DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 denote the INLINEFORM2 -th and INLINEFORM3 -th row of INLINEFORM4 and INLINEFORM5 , respectively, and where INLINEFORM6 is the diagonal matrix of truncated singular values (in which all but the INLINEFORM7 largest singular values are set to zero).
eq:spmi can be interpreted as a smoothed version of the observed PPMI matrix. Due to the truncation of singular values, eq:spmi computes a low-rank embedding of INLINEFORM0 where similar words (in terms of their Hearst patterns) have similar representations. Since eq:spmi is defined for all pairs INLINEFORM1 , it allows us to make hypernymy predictions based on the similarity of words. We also consider factorizing a matrix that is constructed from occurrence probabilities as in eq:prob, denoted by INLINEFORM2 . This approach is then closely related to the method of BIBREF7 , which has been proposed to improve precision and recall for hypernymy detection from Hearst patterns.
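A toy illustration of the smoothed score (the matrix, rank, and indexing are assumptions made for illustration; the point is only the truncated-SVD reconstruction used for scoring):
```python
import numpy as np

# toy PPMI matrix: rows index hyponyms, columns index hypernyms
M = np.array([[2.1, 0.0, 0.3],
              [1.8, 0.0, 0.0],
              [0.0, 1.5, 0.2],
              [0.0, 1.2, 0.0]])

k = 2                                          # truncation rank
U, s, Vt = np.linalg.svd(M, full_matrices=False)
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

def spmi(i, j):
    """Smoothed PPMI score for hyponym i and hypernym j from the rank-k factors."""
    return float(U_k[i] @ S_k @ Vt_k[:, j])

# a pair with zero observed PPMI (row 1, column 2) now receives a non-trivial score
print(spmi(1, 2))
```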
Distributional Hypernym Detection
Most unsupervised distributional approaches for hypernymy detection are based on variants of the Distributional Inclusion Hypothesis BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . Here, we compare to two methods with strong empirical results. As with most DIH measures, they are only defined for large, sparse, positively-valued distributional spaces. First, we consider WeedsPrec BIBREF8 which captures the features of INLINEFORM0 which are included in the set of a broader term's features, INLINEFORM1 : DISPLAYFORM0
Second, we consider invCL BIBREF11 which introduces a notion of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term. In particular, let INLINEFORM0
denote the degree of inclusion of INLINEFORM0 in INLINEFORM1 as proposed by BIBREF12 . To measure both the inclusion of INLINEFORM2 in INLINEFORM3 and the non-inclusion of INLINEFORM4 in INLINEFORM5 , invCL is then defined as INLINEFORM6
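The two measures can be sketched on toy non-negative context vectors as follows (illustrative only; the inclusion term of invCL is implemented here with a Clarke-style min/sum ratio, which is one common choice and may differ in detail from the cited formulation):
```python
import numpy as np

def clarke_de(u, v):
    """Degree of inclusion of u in v: shared feature mass relative to u's total mass."""
    return np.minimum(u, v).sum() / u.sum()

def weeds_prec(u, v):
    """Weight of u's contexts that are also active for v, normalised by u's total weight."""
    return u[(u > 0) & (v > 0)].sum() / u.sum()

def inv_cl(u, v):
    """Combine inclusion of u in v with non-inclusion of v in u."""
    return np.sqrt(clarke_de(u, v) * (1.0 - clarke_de(v, u)))

cat    = np.array([3.0, 2.0, 0.0, 1.0, 0.0])   # narrower term
animal = np.array([2.0, 1.0, 4.0, 2.0, 3.0])   # broader term
print(weeds_prec(cat, animal), inv_cl(cat, animal))
```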
Although most unsupervised distributional approaches are based on the DIH, we also consider the distributional SLQS model based on an alternative informativeness hypothesis BIBREF10 , BIBREF4 . Intuitively, the SLQS model presupposes that general words appear mostly in uninformative contexts, as measured by entropy. Specifically, SLQS depends on the median entropy of a term's top INLINEFORM0 contexts, defined as INLINEFORM1
where INLINEFORM0 is the Shannon entropy of context INLINEFORM1 across all terms, and INLINEFORM2 is chosen in hyperparameter selection. Finally, SLQS is defined using the ratio between the two terms: INLINEFORM3
Since the SLQS model only compares the relative generality of two terms, but does not make judgment about the terms' relatedness, we report SLQS-cos, which multiplies the SLQS measure by cosine similarity of INLINEFORM0 and INLINEFORM1 BIBREF10 .
For completeness, we also include cosine similarity as a baseline in our evaluation.
Evaluation
To evaluate the relative performance of pattern-based and distributional models, we apply them to several challenging hypernymy tasks.
Tasks
Detection: In hypernymy detection, the task is to classify whether pairs of words are in a hypernymy relation. For this task, we evaluate all models on five benchmark datasets: First, we employ the noun-noun subset of bless, which contains hypernymy annotations for 200 concrete, mostly unambiguous nouns. Negative pairs contain a mixture of co-hyponymy, meronymy, and random pairs. This version contains 14,542 total pairs with 1,337 positive examples. Second, we evaluate on leds BIBREF13 , which consists of 2,770 noun pairs balanced between positive hypernymy examples, and randomly shuffled negative pairs. We also consider eval BIBREF14 , containing 7,378 pairs in a mixture of hypernymy, synonymy, antonymy, meronymy, and adjectival relations. eval is notable for its absence of random pairs. The largest dataset is shwartz BIBREF2 , which was collected from a mixture of WordNet, DBPedia, and other resources. We limit ourselves to a 52,578 pair subset excluding multiword expressions. Finally, we evaluate on wbless BIBREF15 , a 1,668 pair subset of bless, with negative pairs being selected from co-hyponymy, random, and hyponymy relations. Previous work has used different metrics for evaluating on BLESS BIBREF11 , BIBREF5 , BIBREF6 . We chose to evaluate the global ranking using Average Precision. This allowed us to use the same metric on all detection benchmarks, and is consistent with evaluations in BIBREF4 .
Direction: In direction prediction, the task is to identify which term is broader in a given pair of words. For this task, we evaluate all models on three datasets described by BIBREF16 : On bless, the task is to predict the direction for all 1337 positive pairs in the dataset. Pairs are only counted correct if the hypernymy direction scores higher than the reverse direction, i.e. INLINEFORM0 . We reserve 10% of the data for validation, and test on the remaining 90%. On wbless, we follow prior work BIBREF17 , BIBREF18 and perform 1000 random iterations in which 2% of the data is used as a validation set to learn a classification threshold, and test on the remainder of the data. We report average accuracy across all iterations. Finally, we evaluate on bibless BIBREF16 , a variant of wbless with hypernymy and hyponymy pairs explicitly annotated for their direction. Since this task requires three-way classification (hypernymy, hyponymy, and other), we perform two-stage classification. First, a threshold is tuned using 2% of the data, identifying whether a pair exhibits hypernymy in either direction. Second, the relative comparison of scores determines which direction is predicted. As with wbless, we report the average accuracy over 1000 iterations.
Graded Entailment: In graded entailment, the task is to quantify the degree to which a hypernymy relation holds. For this task, we follow prior work BIBREF19 , BIBREF18 and use the noun part of hyperlex BIBREF20 , consisting of 2,163 noun pairs which are annotated to what degree INLINEFORM0 is-a INLINEFORM1 holds on a scale of INLINEFORM2 . For all models, we report Spearman's rank correlation INLINEFORM3 . We handle out-of-vocabulary (OOV) words by assigning the median of the scores (computed across the training set) to pairs with OOV words.
Experimental Setup
Pattern-based models: We extract Hearst patterns from the concatenation of Gigaword and Wikipedia, and prepare our corpus by tokenizing, lemmatizing, and POS tagging using CoreNLP 3.8.0. The full set of Hearst patterns is provided in Table TABREF8 . Our selected patterns match prototypical Hearst patterns, like “animals such as cats,” but also include broader patterns like “New Year is the most important holiday.” Leading and following noun phrases are allowed to match limited modifiers (compound nouns, adjectives, etc.), in which case we also generate a hit for the head of the noun phrase. During postprocessing, we remove pairs which were not extracted by at least two distinct patterns. We also remove any pair INLINEFORM0 if INLINEFORM1 . The final corpus contains roughly 4.5M matched pairs, 431K unique pairs, and 243K unique terms. For SVD-based models, we select the rank from INLINEFORM2 {5, 10, 15, 20, 25, 50, 100, 150, 200, 250, 300, 500, 1000} on the validation set. The other pattern-based models do not have any hyperparameters.
Distributional models: For the distributional baselines, we employ the large, sparse distributional space of BIBREF4 , which is computed from UkWaC and Wikipedia, and is known to have strong performance on several of the detection tasks. The corpus was POS tagged and dependency parsed. Distributional contexts were constructed from adjacent words in dependency parses BIBREF21 , BIBREF22 . Targets and contexts which appeared fewer than 100 times in the corpus were filtered, and the resulting co-occurrence matrix was PPMI transformed. The resulting space contains representations for 218K words over 732K context dimensions. For the SLQS model, we selected the number of contexts INLINEFORM0 from the same set of options as the SVD rank in pattern-based models.
Results
Table TABREF13 shows the results from all three experimental settings. In nearly all cases, we find that pattern-based approaches substantially outperform all three distributional models. Particularly strong improvements can be observed on bless (0.76 average precision vs 0.19) and wbless (0.96 vs. 0.69) for the detection tasks and on all directionality tasks. For directionality prediction on bless, the SVD models surpass even the state-of-the-art supervised model of BIBREF18 . Moreover, both SVD models perform generally better than their sparse counterparts on all tasks and datasets except on hyperlex. We performed a posthoc analysis of the validation sets comparing the ppmi and spmi models, and found that the truncated SVD improved recall via its matrix completion properties. We also found that the spmi model downweighted many high-scoring outlier pairs composed of rare terms.
When comparing the INLINEFORM0 and ppmi models to distributional models, we observe mixed results. The shwartz dataset is difficult for sparse models due to its very long tail of low frequency words that are hard to cover using Hearst patterns. On eval, Hearst-pattern based methods get penalized by OOV words, due to the large number of verbs and adjectives in the dataset, which are not captured by our patterns. However, in 7 of the 9 datasets, at least one of the sparse models outperforms all distributional measures, showing that Hearst patterns can provide strong performance on large corpora.
Conclusion
We studied the relative performance of Hearst pattern-based methods and DIH-based methods for hypernym detection. Our results show that the pattern-based methods substantially outperform DIH-based methods on several challenging benchmarks. We find that embedding methods alleviate sparsity concerns of pattern-based approaches and substantially improve coverage. We conclude that Hearst patterns provide important contexts for the detection of hypernymy relations that are not yet captured in DIH models. Our code is available at https://github.com/facebookresearch/hypernymysuite.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful suggestions. We also thank Vered Shwartz, Enrico Santus, and Dominik Schlechtweg for providing us with their distributional spaces and baseline implementations. | WeedsPrec BIBREF8, invCL BIBREF11, SLQS model, cosine similarity |
c06b5623c35b6fa7938340fa340269dc81d061e1 | c06b5623c35b6fa7938340fa340269dc81d061e1_0 | Q: Which benchmark datasets are used?
Text: Introduction
Hierarchical relationships play a central role in knowledge representation and reasoning. Hypernym detection, i.e., the modeling of word-level hierarchies, has long been an important task in natural language processing. Starting with BIBREF0 , pattern-based methods have been one of the most influential approaches to this problem. Their key idea is to exploit certain lexico-syntactic patterns to detect is-a relations in text. For instance, patterns like “ INLINEFORM0 such as INLINEFORM1 ”, or “ INLINEFORM2 and other INLINEFORM3 ” often indicate hypernymy relations of the form INLINEFORM4 is-a INLINEFORM5 . Such patterns may be predefined, or they may be learned automatically BIBREF1 , BIBREF2 . However, a well-known problem of Hearst-like patterns is their extreme sparsity: words must co-occur in exactly the right configuration, or else no relation can be detected.
To alleviate the sparsity issue, the focus in hypernymy detection has recently shifted to distributional representations, wherein words are represented as vectors based on their distribution across large corpora. Such methods offer rich representations of lexical meaning, alleviating the sparsity problem, but require specialized similarity measures to distinguish different lexical relationships. The most successful measures to date are generally inspired by the Distributional Inclusion Hypothesis (DIH) BIBREF3 , which states roughly that contexts in which a narrow term INLINEFORM0 may appear (“cat”) should be a subset of the contexts in which a broader term INLINEFORM1 (“animal”) may appear. Intuitively, the DIH states that we should be able to replace any occurrence of “cat” with “animal” and still have a valid utterance. An important insight from work on distributional methods is that the definition of context is often critical to the success of a system BIBREF4 . Some distributional representations, like positional or dependency-based contexts, may even capture crude Hearst pattern-like features BIBREF5 , BIBREF6 .
While both approaches for hypernym detection rely on co-occurrences within certain contexts, they differ in their context selection strategy: pattern-based methods use predefined manually-curated patterns to generate high-precision extractions while DIH methods rely on unconstrained word co-occurrences in large corpora.
Here, we revisit the idea of using pattern-based methods for hypernym detection. We evaluate several pattern-based models on modern, large corpora and compare them to methods based on the DIH. We find that simple pattern-based methods consistently outperform specialized DIH methods on several difficult hypernymy tasks, including detection, direction prediction, and graded entailment ranking. Moreover, we find that taking low-rank embeddings of pattern-based models substantially improves performance by remedying the sparsity issue. Overall, our results show that Hearst patterns provide high-quality and robust predictions on large corpora by capturing important contextual constraints, which are not yet modeled in distributional methods.
Models
In the following, we discuss pattern-based and distributional methods to detect hypernymy relations. We explicitly consider only relatively simple pattern-based approaches that allow us to directly compare their performance to DIH-based methods.
Pattern-based Hypernym Detection
First, let INLINEFORM0 denote the set of hypernymy relations that have been extracted via Hearst patterns from a text corpus INLINEFORM1 . Furthermore, let INLINEFORM2 denote the count of how often INLINEFORM3 has been extracted and let INLINEFORM4 denote the total number of extractions. In the first, most direct application of Hearst patterns, we then simply use the counts INLINEFORM5 or, equivalently, the extraction probability DISPLAYFORM0
to predict hypernymy relations from INLINEFORM0 .
However, simple extraction probabilities as in eq:prob are skewed by the occurrence probabilities of their constituent words. For instance, it is more likely that we extract (France, country) over (France, republic), just because the word country is more likely to occur than republic. This skew in word distributions is well-known for natural language and also translates to Hearst patterns (see also fig:dist). For this reason, we also consider predicting hypernymy relations based on the Pointwise Mutual Information of Hearst patterns: First, let INLINEFORM0 and INLINEFORM1 denote the probability that INLINEFORM2 occurs as a hyponym and hypernym, respectively. We then define the Positive Pointwise Mutual Information for INLINEFORM3 as DISPLAYFORM0
While eq:pmi can correct for different word occurrence probabilities, it cannot handle missing data. However, sparsity is one of the main issues when using Hearst patterns, as a necessarily incomplete set of extraction rules will lead inevitably to missing extractions. For this purpose, we also study low-rank embeddings of the PPMI matrix, which allow us to make predictions for unseen pairs. In particular, let INLINEFORM0 denote the number of unique terms in INLINEFORM1 . Furthermore, let INLINEFORM2 be the PPMI matrix with entries INLINEFORM3 and let INLINEFORM4 be its Singular Value Decomposition (SVD). We can then predict hypernymy relations based on the truncated SVD of INLINEFORM5 via DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 denote the INLINEFORM2 -th and INLINEFORM3 -th row of INLINEFORM4 and INLINEFORM5 , respectively, and where INLINEFORM6 is the diagonal matrix of truncated singular values (in which all but the INLINEFORM7 largest singular values are set to zero).
eq:spmi can be interpreted as a smoothed version of the observed PPMI matrix. Due to the truncation of singular values, eq:spmi computes a low-rank embedding of INLINEFORM0 where similar words (in terms of their Hearst patterns) have similar representations. Since eq:spmi is defined for all pairs INLINEFORM1 , it allows us to make hypernymy predictions based on the similarity of words. We also consider factorizing a matrix that is constructed from occurrence probabilities as in eq:prob, denoted by INLINEFORM2 . This approach is then closely related to the method of BIBREF7 , which has been proposed to improve precision and recall for hypernymy detection from Hearst patterns.
Distributional Hypernym Detection
Most unsupervised distributional approaches for hypernymy detection are based on variants of the Distributional Inclusion Hypothesis BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . Here, we compare to two methods with strong empirical results. As with most DIH measures, they are only defined for large, sparse, positively-valued distributional spaces. First, we consider WeedsPrec BIBREF8 which captures the features of INLINEFORM0 which are included in the set of a broader term's features, INLINEFORM1 : DISPLAYFORM0
Second, we consider invCL BIBREF11 which introduces a notion of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term. In particular, let INLINEFORM0
denote the degree of inclusion of INLINEFORM0 in INLINEFORM1 as proposed by BIBREF12 . To measure both the inclusion of INLINEFORM2 in INLINEFORM3 and the non-inclusion of INLINEFORM4 in INLINEFORM5 , invCL is then defined as INLINEFORM6
Although most unsupervised distributional approaches are based on the DIH, we also consider the distributional SLQS model based on an alternative informativeness hypothesis BIBREF10 , BIBREF4 . Intuitively, the SLQS model presupposes that general words appear mostly in uninformative contexts, as measured by entropy. Specifically, SLQS depends on the median entropy of a term's top INLINEFORM0 contexts, defined as INLINEFORM1
where INLINEFORM0 is the Shannon entropy of context INLINEFORM1 across all terms, and INLINEFORM2 is chosen in hyperparameter selection. Finally, SLQS is defined using the ratio between the two terms: INLINEFORM3
Since the SLQS model only compares the relative generality of two terms, but does not make judgment about the terms' relatedness, we report SLQS-cos, which multiplies the SLQS measure by cosine similarity of INLINEFORM0 and INLINEFORM1 BIBREF10 .
For completeness, we also include cosine similarity as a baseline in our evaluation.
Evaluation
To evaluate the relative performance of pattern-based and distributional models, we apply them to several challenging hypernymy tasks.
Tasks
Detection: In hypernymy detection, the task is to classify whether pairs of words are in a hypernymy relation. For this task, we evaluate all models on five benchmark datasets: First, we employ the noun-noun subset of bless, which contains hypernymy annotations for 200 concrete, mostly unambiguous nouns. Negative pairs contain a mixture of co-hyponymy, meronymy, and random pairs. This version contains 14,542 total pairs with 1,337 positive examples. Second, we evaluate on leds BIBREF13 , which consists of 2,770 noun pairs balanced between positive hypernymy examples, and randomly shuffled negative pairs. We also consider eval BIBREF14 , containing 7,378 pairs in a mixture of hypernymy, synonymy, antonymy, meronymy, and adjectival relations. eval is notable for its absence of random pairs. The largest dataset is shwartz BIBREF2 , which was collected from a mixture of WordNet, DBPedia, and other resources. We limit ourselves to a 52,578 pair subset excluding multiword expressions. Finally, we evaluate on wbless BIBREF15 , a 1,668 pair subset of bless, with negative pairs being selected from co-hyponymy, random, and hyponymy relations. Previous work has used different metrics for evaluating on BLESS BIBREF11 , BIBREF5 , BIBREF6 . We chose to evaluate the global ranking using Average Precision. This allowed us to use the same metric on all detection benchmarks, and is consistent with evaluations in BIBREF4 .
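As an illustration of the detection metric, average precision over a globally ranked list of scored candidate pairs can be computed as follows (toy labels and scores, not data from any of the benchmarks):
```python
from sklearn.metrics import average_precision_score

# hypothetical model scores for labelled candidate pairs (1 = hypernymy, 0 = other)
gold   = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.4, 0.7, 0.6, 0.1, 0.3]

print("AP:", average_precision_score(gold, scores))
```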
Direction: In direction prediction, the task is to identify which term is broader in a given pair of words. For this task, we evaluate all models on three datasets described by BIBREF16 : On bless, the task is to predict the direction for all 1337 positive pairs in the dataset. Pairs are only counted correct if the hypernymy direction scores higher than the reverse direction, i.e. INLINEFORM0 . We reserve 10% of the data for validation, and test on the remaining 90%. On wbless, we follow prior work BIBREF17 , BIBREF18 and perform 1000 random iterations in which 2% of the data is used as a validation set to learn a classification threshold, and test on the remainder of the data. We report average accuracy across all iterations. Finally, we evaluate on bibless BIBREF16 , a variant of wbless with hypernymy and hyponymy pairs explicitly annotated for their direction. Since this task requires three-way classification (hypernymy, hyponymy, and other), we perform two-stage classification. First, a threshold is tuned using 2% of the data, identifying whether a pair exhibits hypernymy in either direction. Second, the relative comparison of scores determines which direction is predicted. As with wbless, we report the average accuracy over 1000 iterations.
Graded Entailment: In graded entailment, the task is to quantify the degree to which a hypernymy relation holds. For this task, we follow prior work BIBREF19 , BIBREF18 and use the noun part of hyperlex BIBREF20 , consisting of 2,163 noun pairs which are annotated to what degree INLINEFORM0 is-a INLINEFORM1 holds on a scale of INLINEFORM2 . For all models, we report Spearman's rank correlation INLINEFORM3 . We handle out-of-vocabulary (OOV) words by assigning the median of the scores (computed across the training set) to pairs with OOV words.
Experimental Setup
Pattern-based models: We extract Hearst patterns from the concatenation of Gigaword and Wikipedia, and prepare our corpus by tokenizing, lemmatizing, and POS tagging using CoreNLP 3.8.0. The full set of Hearst patterns is provided in Table TABREF8 . Our selected patterns match prototypical Hearst patterns, like “animals such as cats,” but also include broader patterns like “New Year is the most important holiday.” Leading and following noun phrases are allowed to match limited modifiers (compound nouns, adjectives, etc.), in which case we also generate a hit for the head of the noun phrase. During postprocessing, we remove pairs which were not extracted by at least two distinct patterns. We also remove any pair INLINEFORM0 if INLINEFORM1 . The final corpus contains roughly 4.5M matched pairs, 431K unique pairs, and 243K unique terms. For SVD-based models, we select the rank from INLINEFORM2 {5, 10, 15, 20, 25, 50, 100, 150, 200, 250, 300, 500, 1000} on the validation set. The other pattern-based models do not have any hyperparameters.
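A minimal sketch of the extraction step for a single prototypical pattern (the real pipeline uses the full pattern set in Table TABREF8 over lemmatized, POS-tagged, noun-phrase-chunked text; this plain regex is only illustrative):
```python
import re

# one prototypical Hearst pattern: "NP_y such as NP_x"
SUCH_AS = re.compile(r"(\w+) such as (\w+)")

def extract_pairs(sentence):
    """Return (hyponym, hypernym) pairs matched by the toy pattern."""
    return [(x.lower(), y.lower()) for y, x in SUCH_AS.findall(sentence)]

print(extract_pairs("They kept animals such as cats and dogs."))
```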
Distributional models: For the distributional baselines, we employ the large, sparse distributional space of BIBREF4 , which is computed from UkWaC and Wikipedia, and is known to have strong performance on several of the detection tasks. The corpus was POS tagged and dependency parsed. Distributional contexts were constructed from adjacent words in dependency parses BIBREF21 , BIBREF22 . Targets and contexts which appeared fewer than 100 times in the corpus were filtered, and the resulting co-occurrence matrix was PPMI transformed. The resulting space contains representations for 218K words over 732K context dimensions. For the SLQS model, we selected the number of contexts INLINEFORM0 from the same set of options as the SVD rank in pattern-based models.
Results
Table TABREF13 shows the results from all three experimental settings. In nearly all cases, we find that pattern-based approaches substantially outperform all three distributional models. Particularly strong improvements can be observed on bless (0.76 average precision vs 0.19) and wbless (0.96 vs. 0.69) for the detection tasks and on all directionality tasks. For directionality prediction on bless, the SVD models surpass even the state-of-the-art supervised model of BIBREF18 . Moreover, both SVD models perform generally better than their sparse counterparts on all tasks and datasets except on hyperlex. We performed a posthoc analysis of the validation sets comparing the ppmi and spmi models, and found that the truncated SVD improved recall via its matrix completion properties. We also found that the spmi model downweighted many high-scoring outlier pairs composed of rare terms.
When comparing the INLINEFORM0 and ppmi models to distributional models, we observe mixed results. The shwartz dataset is difficult for sparse models due to its very long tail of low frequency words that are hard to cover using Hearst patterns. On eval, Hearst-pattern based methods get penalized by OOV words, due to the large number of verbs and adjectives in the dataset, which are not captured by our patterns. However, in 7 of the 9 datasets, at least one of the sparse models outperforms all distributional measures, showing that Hearst patterns can provide strong performance on large corpora.
Conclusion
We studied the relative performance of Hearst pattern-based methods and DIH-based methods for hypernym detection. Our results show that the pattern-based methods substantially outperform DIH-based methods on several challenging benchmarks. We find that embedding methods alleviate sparsity concerns of pattern-based approaches and substantially improve coverage. We conclude that Hearst patterns provide important contexts for the detection of hypernymy relations that are not yet captured in DIH models. Our code is available at https://github.com/facebookresearch/hypernymysuite.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful suggestions. We also thank Vered Shwartz, Enrico Santus, and Dominik Schlechtweg for providing us with their distributional spaces and baseline implementations. | noun-noun subset of bless, leds BIBREF13, bless, wbless, bibless, hyperlex BIBREF20 |
d325a3c21660dbc481b4e839ff1a2d37dcc7ca46 | d325a3c21660dbc481b4e839ff1a2d37dcc7ca46_0 | Q: What hypernymy tasks do they study?
Text: Introduction
Hierarchical relationships play a central role in knowledge representation and reasoning. Hypernym detection, i.e., the modeling of word-level hierarchies, has long been an important task in natural language processing. Starting with BIBREF0 , pattern-based methods have been one of the most influential approaches to this problem. Their key idea is to exploit certain lexico-syntactic patterns to detect is-a relations in text. For instance, patterns like “ INLINEFORM0 such as INLINEFORM1 ”, or “ INLINEFORM2 and other INLINEFORM3 ” often indicate hypernymy relations of the form INLINEFORM4 is-a INLINEFORM5 . Such patterns may be predefined, or they may be learned automatically BIBREF1 , BIBREF2 . However, a well-known problem of Hearst-like patterns is their extreme sparsity: words must co-occur in exactly the right configuration, or else no relation can be detected.
To alleviate the sparsity issue, the focus in hypernymy detection has recently shifted to distributional representations, wherein words are represented as vectors based on their distribution across large corpora. Such methods offer rich representations of lexical meaning, alleviating the sparsity problem, but require specialized similarity measures to distinguish different lexical relationships. The most successful measures to date are generally inspired by the Distributional Inclusion Hypothesis (DIH) BIBREF3 , which states roughly that contexts in which a narrow term INLINEFORM0 may appear (“cat”) should be a subset of the contexts in which a broader term INLINEFORM1 (“animal”) may appear. Intuitively, the DIH states that we should be able to replace any occurrence of “cat” with “animal” and still have a valid utterance. An important insight from work on distributional methods is that the definition of context is often critical to the success of a system BIBREF4 . Some distributional representations, like positional or dependency-based contexts, may even capture crude Hearst pattern-like features BIBREF5 , BIBREF6 .
While both approaches for hypernym detection rely on co-occurrences within certain contexts, they differ in their context selection strategy: pattern-based methods use predefined manually-curated patterns to generate high-precision extractions while DIH methods rely on unconstrained word co-occurrences in large corpora.
Here, we revisit the idea of using pattern-based methods for hypernym detection. We evaluate several pattern-based models on modern, large corpora and compare them to methods based on the DIH. We find that simple pattern-based methods consistently outperform specialized DIH methods on several difficult hypernymy tasks, including detection, direction prediction, and graded entailment ranking. Moreover, we find that taking low-rank embeddings of pattern-based models substantially improves performance by remedying the sparsity issue. Overall, our results show that Hearst patterns provide high-quality and robust predictions on large corpora by capturing important contextual constraints, which are not yet modeled in distributional methods.
Models
In the following, we discuss pattern-based and distributional methods to detect hypernymy relations. We explicitly consider only relatively simple pattern-based approaches that allow us to directly compare their performance to DIH-based methods.
Pattern-based Hypernym Detection
First, let INLINEFORM0 denote the set of hypernymy relations that have been extracted via Hearst patterns from a text corpus INLINEFORM1 . Furthermore, let INLINEFORM2 denote the count of how often INLINEFORM3 has been extracted and let INLINEFORM4 denote the total number of extractions. In the first, most direct application of Hearst patterns, we then simply use the counts INLINEFORM5 or, equivalently, the extraction probability DISPLAYFORM0
to predict hypernymy relations from INLINEFORM0 .
However, simple extraction probabilities as in eq:prob are skewed by the occurrence probabilities of their constituent words. For instance, it is more likely that we extract (France, country) over (France, republic), just because the word country is more likely to occur than republic. This skew in word distributions is well-known for natural language and also translates to Hearst patterns (see also fig:dist). For this reason, we also consider predicting hypernymy relations based on the Pointwise Mutual Information of Hearst patterns: First, let INLINEFORM0 and INLINEFORM1 denote the probability that INLINEFORM2 occurs as a hyponym and hypernym, respectively. We then define the Positive Pointwise Mutual Information for INLINEFORM3 as DISPLAYFORM0
While eq:pmi can correct for different word occurrence probabilities, it cannot handle missing data. However, sparsity is one of the main issues when using Hearst patterns, as a necessarily incomplete set of extraction rules will lead inevitably to missing extractions. For this purpose, we also study low-rank embeddings of the PPMI matrix, which allow us to make predictions for unseen pairs. In particular, let INLINEFORM0 denote the number of unique terms in INLINEFORM1 . Furthermore, let INLINEFORM2 be the PPMI matrix with entries INLINEFORM3 and let INLINEFORM4 be its Singular Value Decomposition (SVD). We can then predict hypernymy relations based on the truncated SVD of INLINEFORM5 via DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 denote the INLINEFORM2 -th and INLINEFORM3 -th row of INLINEFORM4 and INLINEFORM5 , respectively, and where INLINEFORM6 is the diagonal matrix of truncated singular values (in which all but the INLINEFORM7 largest singular values are set to zero).
eq:spmi can be interpreted as a smoothed version of the observed PPMI matrix. Due to the truncation of singular values, eq:spmi computes a low-rank embedding of INLINEFORM0 where similar words (in terms of their Hearst patterns) have similar representations. Since eq:spmi is defined for all pairs INLINEFORM1 , it allows us to make hypernymy predictions based on the similarity of words. We also consider factorizing a matrix that is constructed from occurrence probabilities as in eq:prob, denoted by INLINEFORM2 . This approach is then closely related to the method of BIBREF7 , which has been proposed to improve precision and recall for hypernymy detection from Hearst patterns.
Distributional Hypernym Detection
Most unsupervised distributional approaches for hypernymy detection are based on variants of the Distributional Inclusion Hypothesis BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . Here, we compare to two methods with strong empirical results. As with most DIH measures, they are only defined for large, sparse, positively-valued distributional spaces. First, we consider WeedsPrec BIBREF8 which captures the features of INLINEFORM0 which are included in the set of a broader term's features, INLINEFORM1 : DISPLAYFORM0
Second, we consider invCL BIBREF11 which introduces a notion of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term. In particular, let INLINEFORM0
denote the degree of inclusion of INLINEFORM0 in INLINEFORM1 as proposed by BIBREF12 . To measure both the inclusion of INLINEFORM2 in INLINEFORM3 and the non-inclusion of INLINEFORM4 in INLINEFORM5 , invCL is then defined as INLINEFORM6
Although most unsupervised distributional approaches are based on the DIH, we also consider the distributional SLQS model based on an alternative informativeness hypothesis BIBREF10 , BIBREF4 . Intuitively, the SLQS model presupposes that general words appear mostly in uninformative contexts, as measured by entropy. Specifically, SLQS depends on the median entropy of a term's top INLINEFORM0 contexts, defined as INLINEFORM1
where INLINEFORM0 is the Shannon entropy of context INLINEFORM1 across all terms, and INLINEFORM2 is chosen in hyperparameter selection. Finally, SLQS is defined using the ratio between the two terms: INLINEFORM3
Since the SLQS model only compares the relative generality of two terms, but does not make judgment about the terms' relatedness, we report SLQS-cos, which multiplies the SLQS measure by cosine similarity of INLINEFORM0 and INLINEFORM1 BIBREF10 .
For completeness, we also include cosine similarity as a baseline in our evaluation.
Evaluation
To evaluate the relative performance of pattern-based and distributional models, we apply them to several challenging hypernymy tasks.
Tasks
Detection: In hypernymy detection, the task is to classify whether pairs of words are in a hypernymy relation. For this task, we evaluate all models on five benchmark datasets: First, we employ the noun-noun subset of bless, which contains hypernymy annotations for 200 concrete, mostly unambiguous nouns. Negative pairs contain a mixture of co-hyponymy, meronymy, and random pairs. This version contains 14,542 total pairs with 1,337 positive examples. Second, we evaluate on leds BIBREF13 , which consists of 2,770 noun pairs balanced between positive hypernymy examples, and randomly shuffled negative pairs. We also consider eval BIBREF14 , containing 7,378 pairs in a mixture of hypernymy, synonymy, antonymy, meronymy, and adjectival relations. eval is notable for its absence of random pairs. The largest dataset is shwartz BIBREF2 , which was collected from a mixture of WordNet, DBPedia, and other resources. We limit ourselves to a 52,578 pair subset excluding multiword expressions. Finally, we evaluate on wbless BIBREF15 , a 1,668 pair subset of bless, with negative pairs being selected from co-hyponymy, random, and hyponymy relations. Previous work has used different metrics for evaluating on BLESS BIBREF11 , BIBREF5 , BIBREF6 . We chose to evaluate the global ranking using Average Precision. This allowed us to use the same metric on all detection benchmarks, and is consistent with evaluations in BIBREF4 .
Direction: In direction prediction, the task is to identify which term is broader in a given pair of words. For this task, we evaluate all models on three datasets described by BIBREF16 : On bless, the task is to predict the direction for all 1337 positive pairs in the dataset. Pairs are only counted correct if the hypernymy direction scores higher than the reverse direction, i.e. INLINEFORM0 . We reserve 10% of the data for validation, and test on the remaining 90%. On wbless, we follow prior work BIBREF17 , BIBREF18 and perform 1000 random iterations in which 2% of the data is used as a validation set to learn a classification threshold, and test on the remainder of the data. We report average accuracy across all iterations. Finally, we evaluate on bibless BIBREF16 , a variant of wbless with hypernymy and hyponymy pairs explicitly annotated for their direction. Since this task requires three-way classification (hypernymy, hyponymy, and other), we perform two-stage classification. First, a threshold is tuned using 2% of the data, identifying whether a pair exhibits hypernymy in either direction. Second, the relative comparison of scores determines which direction is predicted. As with wbless, we report the average accuracy over 1000 iterations.
Graded Entailment: In graded entailment, the task is to quantify the degree to which a hypernymy relation holds. For this task, we follow prior work BIBREF19 , BIBREF18 and use the noun part of hyperlex BIBREF20 , consisting of 2,163 noun pairs which are annotated to what degree INLINEFORM0 is-a INLINEFORM1 holds on a scale of INLINEFORM2 . For all models, we report Spearman's rank correlation INLINEFORM3 . We handle out-of-vocabulary (OOV) words by assigning the median of the scores (computed across the training set) to pairs with OOV words.
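The graded-entailment metric itself is straightforward to compute (toy scores and ratings, not actual hyperlex data):
```python
from scipy.stats import spearmanr

# hypothetical model scores vs. human graded-entailment ratings
model_scores = [0.8, 0.1, 0.5, 0.9, 0.2]
gold_ratings = [5.5, 1.0, 3.0, 6.0, 0.5]

rho, _ = spearmanr(model_scores, gold_ratings)
print("Spearman rho:", rho)
```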
Experimental Setup
Pattern-based models: We extract Hearst patterns from the concatenation of Gigaword and Wikipedia, and prepare our corpus by tokenizing, lemmatizing, and POS tagging using CoreNLP 3.8.0. The full set of Hearst patterns is provided in Table TABREF8 . Our selected patterns match prototypical Hearst patterns, like “animals such as cats,” but also include broader patterns like “New Year is the most important holiday.” Leading and following noun phrases are allowed to match limited modifiers (compound nouns, adjectives, etc.), in which case we also generate a hit for the head of the noun phrase. During postprocessing, we remove pairs which were not extracted by at least two distinct patterns. We also remove any pair INLINEFORM0 if INLINEFORM1 . The final corpus contains roughly 4.5M matched pairs, 431K unique pairs, and 243K unique terms. For SVD-based models, we select the rank from INLINEFORM2 {5, 10, 15, 20, 25, 50, 100, 150, 200, 250, 300, 500, 1000} on the validation set. The other pattern-based models do not have any hyperparameters.
Distributional models: For the distributional baselines, we employ the large, sparse distributional space of BIBREF4 , which is computed from UkWaC and Wikipedia, and is known to have strong performance on several of the detection tasks. The corpus was POS tagged and dependency parsed. Distributional contexts were constructed from adjacent words in dependency parses BIBREF21 , BIBREF22 . Targets and contexts which appeared fewer than 100 times in the corpus were filtered, and the resulting co-occurrence matrix was PPMI transformed. The resulting space contains representations for 218K words over 732K context dimensions. For the SLQS model, we selected the number of contexts INLINEFORM0 from the same set of options as the SVD rank in pattern-based models.
Results
Table TABREF13 shows the results from all three experimental settings. In nearly all cases, we find that pattern-based approaches substantially outperform all three distributional models. Particularly strong improvements can be observed on bless (0.76 average precision vs 0.19) and wbless (0.96 vs. 0.69) for the detection tasks and on all directionality tasks. For directionality prediction on bless, the SVD models surpass even the state-of-the-art supervised model of BIBREF18 . Moreover, both SVD models perform generally better than their sparse counterparts on all tasks and datasets except on hyperlex. We performed a posthoc analysis of the validation sets comparing the ppmi and spmi models, and found that the truncated SVD improved recall via its matrix completion properties. We also found that the spmi model downweighted many high-scoring outlier pairs composed of rare terms.
When comparing the INLINEFORM0 and ppmi models to distributional models, we observe mixed results. The shwartz dataset is difficult for sparse models due to its very long tail of low frequency words that are hard to cover using Hearst patterns. On eval, Hearst-pattern based methods get penalized by OOV words, due to the large number of verbs and adjectives in the dataset, which are not captured by our patterns. However, in 7 of the 9 datasets, at least one of the sparse models outperforms all distributional measures, showing that Hearst patterns can provide strong performance on large corpora.
Conclusion
We studied the relative performance of Hearst pattern-based methods and DIH-based methods for hypernym detection. Our results show that the pattern-based methods substantially outperform DIH-based methods on several challenging benchmarks. We find that embedding methods alleviate sparsity concerns of pattern-based approaches and substantially improve coverage. We conclude that Hearst patterns provide important contexts for the detection of hypernymy relations that are not yet captured in DIH models. Our code is available at https://github.com/facebookresearch/hypernymysuite.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful suggestions. We also thank Vered Shwartz, Enrico Santus, and Dominik Schlechtweg for providing us with their distributional spaces and baseline implementations. | Detection, Direction, Graded Entailment |
eae13c9693ace504eab1f96c91b16a0627cd1f75 | eae13c9693ace504eab1f96c91b16a0627cd1f75_0 | Q: Do they report results only on English data?
Text: Introduction
Multi-task learning (MTL) refers to machine learning approaches in which information and representations are shared to solve multiple, related tasks. Relative to single-task learning approaches, MTL often shows improved performance on some or all sub-tasks and can be more computationally efficient BIBREF0, BIBREF1, BIBREF2, BIBREF3. We focus here on a form of MTL known as hard parameter sharing. Hard parameter sharing refers to the use of deep learning models in which inputs to models first pass through a number of shared layers. The hidden representations produced by these shared layers are then fed as inputs to a number of task-specific layers.
Within the domain of natural language processing (NLP), MTL approaches have been applied to a wide range of problems BIBREF3. In recent years, one particularly fruitful application of MTL to NLP has been joint solving of named entity recognition (NER) and relation extraction (RE), two important information extraction tasks with applications in search, question answering, and knowledge base construction BIBREF4. NER consists in the identification of spans of text as corresponding to named entities and the classification of each span's entity type. RE consists in the identification of all triples $(e_i, e_j, r)$, where $e_i$ and $e_j$ are named entities and $r$ is a relation that holds between $e_i$ and $e_j$ according to the text. For example, in Figure FIGREF1, Edgar Allan Poe and Boston are named entities of the types People and Location, respectively. In addition, the text indicates that the Lives-In relation obtains between Edgar Allan Poe and Boston.
One option for solving these two problems is a pipeline approach using two independent models, each designed to solve a single task, with the output of the NER model serving as an input to the RE model. However, MTL approaches offer a number of advantages over the pipeline approach. First, the pipeline approach is more susceptible to error propagation wherein prediction errors from the NER model enter the RE model as inputs that the latter model cannot correct. Second, the pipeline approach only allows solutions to the NER task to inform the RE task, but not vice versa. In contrast, the joint approach allows for solutions to either task to inform the other. For example, learning that there is a Lives-In relation between Edgar Allan Poe and Boston can be useful for determining the types of these entities. Finally, the joint approach can be computationally more efficient than the pipeline approach. As mentioned above, MTL approaches are generally more efficient than single-task learning alternatives. This is due to the fact that solutions to related tasks often rely on similar information, which in an MTL setting only needs to be represented in one model in order to solve all tasks. For example, the fact that Edgar Allan Poe is followed by was born can help a model determine both that Edgar Allan Poe is an instance of a People entity and that the sentence expresses a Lives-In relation.
While the choice as to which and how many layers to share between tasks is known to be an important factor relevant to the performance of MTL models BIBREF5, BIBREF2, this issue has received relatively little attention within the context of joint NER and RE. As we show below in Section 2, prior proposals for jointly solving NER and RE have typically made use of very few task-specific parameters or have mostly used task-specific parameters only for the RE task. We seek to correct for this oversight by proposing a novel neural architecture for joint NER and RE. In particular, we make the following contributions:
We allow for deeper task-specificity than does previous work via the use of additional task-specific bidirectional recurrent neural networks (BiRNNs) for both tasks.
Because the relatedness between the NER and RE tasks is not constant across all textual domains, we take the number of shared and task-specific layers to be an explicit hyperparameter of the model that can be tuned separately for different datasets.
We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture.
Related Work
We focus in this section on previous deep learning approaches to solving the tasks of NER and RE, as this work is most directly comparable to our proposal. Most work on joint NER and RE has adopted a BIO or BILOU scheme for the NER task, where each token is labeled to indicate whether it is the (B)eginning of an entity, (I)nside an entity, or (O)utside an entity. The BILOU scheme extends these labels to indicate if a token is the (L)ast token of an entity or is a (U)nit, i.e. the only token within an entity span.
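As a small illustration of the BIO scheme (the BILOU variant additionally uses L and U tags), the helper below converts gold entity spans into token-level tags; the span indices and type names are hypothetical.

```python
from typing import List, Tuple

def spans_to_bio(n_tokens: int, spans: List[Tuple[int, int, str]]) -> List[str]:
    """Convert (start, end, type) spans with inclusive end indices into BIO tags."""
    tags = ["O"] * n_tokens
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end + 1):
            tags[i] = f"I-{etype}"
    return tags

# "Edgar Allan Poe was born in Boston"
print(spans_to_bio(7, [(0, 2, "People"), (6, 6, "Location")]))
# ['B-People', 'I-People', 'I-People', 'O', 'O', 'O', 'B-Location']
```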
Several approaches treat the NER and RE tasks as if they were a single task. For example, Gupta et al. gupta-etal-2016-table, following Miwa and Sasaki miwa-sasaki-2014-modeling, treat the two tasks as a table-filling problem where each cell in the table corresponds to a pair of tokens $(t_i, t_j)$ in the input text. For the diagonal of the table, the cell label is the BILOU tag for $t_i$. All other cells are labeled with the relation $r$, if it exists, such that $(e_i, e_j, r)$, where $e_i$ is the entity whose span's final token is $t_i$, is in the set of true relations. A BiRNN is trained to fill the cells of the table. Zheng et al. Zheng2017 introduce a BILOU tagging scheme that incorporates relation information into the tags, allowing them to treat both tasks as if they were a single NER task. A series of two bidirectional LSTM (BiLSTM) layers and a final softmax layer are used to produce output tags. Li et al. li2019entity solve both tasks as a form of multi-turn question answering in which the input text is queried with question templates first to detect entities and then, given the detected entities, to detect any relations between these entities. Li et al. use BERT BIBREF8 as the backbone of their question-answering model and produce answers by tagging the input text with BILOU tags to identify the span corresponding to the answer(s).
The above approaches allow for very little task-specificity, since both the NER task and the RE task are coerced into a single task. Other approaches incorporate greater task-specificity in one of two ways. First, several models share the majority of model parameters between the NER and RE tasks, but also have separate scoring and/or output layers used to produce separate outputs for each task. For example, Katiyar and Cardie katiyar-cardie-2017-going and Bekoulis et al. bekoulis2018joint propose models in which token representations first pass through one or more shared BiLSTM layers. Katiyar and Cardie use a softmax layer to tag tokens with BILOU tags to solve the NER task and use an attention layer to detect relations between each pair of entities. Bekoulis et al., following Lample et al. Lample2016, use a conditional random field (CRF) layer to produce BIO tags for the NER task. The output from the shared BiLSTM layer for every pair of tokens is passed through relation scoring and sigmoid layers to predict relations.
A second method of incorporating greater task-specificity into these models is via deeper layers for solving the RE task. Miwa and Bansal miwa-bansal-2016-end and Li et al. li2017neural pass token representations through a BiLSTM layer and then use a softmax layer to label each token with the appropriate BILOU label. Both proposals then use a type of tree-structured bidirectional LSTM layer stacked on top of the shared BiLSTM to solve the RE task. Nguyen and Verspoor nguyen2019end use BiLSTM and CRF layers to perform the NER task. Label embeddings are created from predicted NER labels, concatenated with token representations, and then passed through a RE-specific BiLSTM. A biaffine attention layer BIBREF9 operates on the output of this BiLSTM to predict relations.
An alternative to the BIO/BILOU scheme is the span-based approach, wherein spans of the input text are directly labeled as to whether they correspond to any entity and, if so, their entity types. Luan et al. Luan2018 adopt a span-based approach in which token representations are first passed through a BiLSTM layer. The output from the BiLSTM is used to construct representations of candidate entity spans, which are then scored for both the NER and RE tasks via feed forward layers. Luan et al. Luan2019 follow a similar approach, but construct coreference and relation graphs between entities to propagate information between entities connected in these graphs. The resulting entity representations are then classified for NER and RE via feed forward layers. To the best of our knowledge, the current SOTA model for joint NER and RE is the span-based proposal of Eberts and Ulges eberts2019span. In this architecture, token representations are obtained using a pre-trained BERT model that is fine-tuned during training. Representations for candidate entity spans are obtained by max pooling over all tokens in each span. Span representations are passed through an entity classification layer to solve the NER task. Representations of all pairs of spans that are predicted to be entities and representations of the contexts between these pairs are then passed through a final layer with sigmoid activation to predict relations between entities. With respect to their degrees of task-specificity, these span-based approaches resemble the BIO/BILOU approaches in which the majority of model parameters are shared, but each task possesses independent scoring and/or output layers.
Overall, previous approaches to joint NER and RE have experimented little with deep task-specificity, with the exception of those models that include additional layers for the RE task. To our knowledge, no work has considered including additional NER-specific layers beyond scoring and/or output layers. This may reflect a residual influence of the pipeline approach, in which the NER task must be solved first before additional layers are used to solve the RE task. However, there is no a priori reason to think that the RE task would benefit more from additional task-specific layers than the NER task. We also note that while previous work has tackled joint NER and RE in a variety of textual domains, in all cases the number of shared and task-specific parameters is held constant across these domains.
Model
The architecture proposed here is inspired by several previous proposals BIBREF10, BIBREF11, BIBREF12. We treat the NER task as a sequence labeling problem using BIO labels. Token representations are first passed through a series of shared, BiRNN layers. Stacked on top of these shared BiRNN layers is a sequence of task-specific BiRNN layers for both the NER and RE tasks. We take the number of shared and task-specific layers to be a hyperparameter of the model. Both sets of task-specific BiRNN layers are followed by task-specific scoring and output layers. Figure FIGREF4 illustrates this architecture. Below, we use superscript $e$ for NER-specific variables and layers and superscript $r$ for RE-specific variables and layers.
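A minimal PyTorch-style skeleton of this layout is sketched below. It is not the implementation evaluated here: the choice of LSTMs, the layer sizes, and the simple linear scoring heads are placeholder assumptions, but it shows how the numbers of shared and task-specific BiRNN layers can be exposed as hyperparameters. In the full model, the NER head is a CRF over feed-forward scores and the RE head combines DistMult terms with a sigmoid output, as described in the subsections below.

```python
import torch
import torch.nn as nn

def birnn_stack(input_size: int, hidden_size: int, n_layers: int) -> nn.ModuleList:
    """A stack of bidirectional LSTMs; each layer consumes the previous layer's output."""
    layers = []
    for i in range(n_layers):
        in_size = input_size if i == 0 else 2 * hidden_size
        layers.append(nn.LSTM(in_size, hidden_size, bidirectional=True, batch_first=True))
    return nn.ModuleList(layers)

def run_stack(stack: nn.ModuleList, x: torch.Tensor) -> torch.Tensor:
    for rnn in stack:
        x, _ = rnn(x)
    return x

class JointNerRe(nn.Module):
    def __init__(self, emb_size, hidden, n_shared, n_ner, n_re, n_ent_tags, n_rel_types):
        super().__init__()
        self.shared = birnn_stack(emb_size, hidden, n_shared)
        self.ner = birnn_stack(2 * hidden, hidden, n_ner)    # may be empty
        self.re = birnn_stack(2 * hidden, hidden, n_re)      # may be empty
        self.ner_score = nn.Linear(2 * hidden, n_ent_tags)   # placeholder for FFNN + CRF head
        self.re_score = nn.Linear(4 * hidden, n_rel_types)   # placeholder pair scorer

    def forward(self, emb):                                  # emb: (batch, seq, emb_size)
        h = run_stack(self.shared, emb)
        h_ner = run_stack(self.ner, h) if len(self.ner) else h
        h_re = run_stack(self.re, h) if len(self.re) else h
        ner_logits = self.ner_score(h_ner)
        # Pair representations: concatenate every token pair (span filtering omitted here).
        b, n, d = h_re.shape
        pairs = torch.cat([h_re.unsqueeze(2).expand(b, n, n, d),
                           h_re.unsqueeze(1).expand(b, n, n, d)], dim=-1)
        re_logits = self.re_score(pairs)
        return ner_logits, re_logits
```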
Model ::: Shared Layers
We obtain contextual token embeddings using the pre-trained ELMo 5.5B model BIBREF13. For each token $t_i$ in the input text, this model returns three vectors, which we combine via a weighted averaging layer. Each token $t_i$'s weighted ELMo embedding $\mathbf {t}^{elmo}_{i}$ is concatenated to a pre-trained GloVe embedding BIBREF14 $\mathbf {t}^{glove}_{i}$, a character-level word embedding $\mathbf {t}^{char}_i$ learned via a single BiRNN layer BIBREF15, and a one-hot encoded casing vector $\mathbf {t}^{casing}_i$. The full representation of $t_i$ is given by $\mathbf {v}_i$ (where $\circ $ denotes concatenation): $\mathbf {v}_i = \mathbf {t}^{elmo}_{i} \circ \mathbf {t}^{glove}_{i} \circ \mathbf {t}^{char}_{i} \circ \mathbf {t}^{casing}_{i}$.
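In code, this concatenation amounts to joining the four vectors along the feature dimension; the dimensionalities below are illustrative assumptions rather than the tuned values.

```python
import torch

def token_representation(elmo_vec, glove_vec, char_vec, casing_vec):
    """v_i = weighted ELMo o GloVe o character-BiRNN o casing, with o denoting concatenation."""
    return torch.cat([elmo_vec, glove_vec, char_vec, casing_vec], dim=-1)

v_i = token_representation(torch.randn(1024),  # assumed ELMo dimensionality
                           torch.randn(300),   # assumed GloVe dimensionality
                           torch.randn(50),    # assumed char-BiRNN dimensionality
                           torch.zeros(6))     # assumed number of casing features
print(v_i.shape)  # torch.Size([1380])
```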
For an input text with $n$ tokens, $\mathbf {v}_{1:n}$ are fed as input to a sequence of one or more shared BiRNN layers, with the output sequence from the $i$th shared BiRNN layer serving as the input sequence to the $i + 1$st shared BiRNN layer.
Model ::: NER-Specific Layers
The final shared BiRNN layer is followed by a sequence of zero or more NER-specific BiRNN layers; the output of the final shared BiRNN layer serves as input to the first NER-specific BiRNN layer, if such a layer exists, and the output from the $i$th NER-specific BiRNN layer serves as input to the $i + 1$st NER-specific BiRNN layer. For every token $t_i$, let $\mathbf {h}^{e}_i$ denote an NER-specific hidden representation for $t_i$, corresponding to the $i$th element of the output sequence from the final NER-specific BiRNN layer, or from the final shared BiRNN layer if there are zero NER-specific BiRNN layers. An NER score for token $t_i$, $\mathbf {s}^{e}_i$, is obtained by passing $\mathbf {h}^{e}_i$ through a series of two feed forward layers: $\mathbf {s}^{e}_i = \text{FFNN}^{(e2)}(\text{FFNN}^{(e1)}(\mathbf {h}^{e}_i))$.
The activation function of $\text{FFNN}^{(e1)}$ and its output size are treated as hyperparameters. $\text{FFNN}^{(e2)}$ uses linear activation and its output size is $|\mathcal {E}|$, where $\mathcal {E}$ is the set of possible entity types. The sequence of NER scores for all tokens, $\mathbf {s}^{e}_{1:n}$, is then passed as input to a linear-chain CRF layer to produce the final BIO tag predictions, $\hat{\mathbf {y}}^e_{1:n}$. During inference, Viterbi decoding is used to determine the most likely sequence $\hat{\mathbf {y}}^e_{1:n}$.
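The Viterbi step can be sketched with a generic linear-chain decoder over per-token tag scores and a tag-transition matrix; this is a textbook implementation rather than the CRF layer used in the experiments, and the score matrices below are random placeholders.

```python
import numpy as np

def viterbi_decode(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """emissions: (seq_len, n_tags) per-token tag scores.
    transitions[i, j]: score of moving from tag i to tag j.
    Returns the highest-scoring tag index sequence."""
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()                 # best path score ending in each tag at position 0
    backptr = np.zeros((seq_len, n_tags), dtype=int)
    for t in range(1, seq_len):
        # candidate[i, j]: best path ending in tag i at t-1, extended with tag j at t
        candidate = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)
    path = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

print(viterbi_decode(np.random.randn(5, 4), np.random.randn(4, 4)))  # e.g. [2, 0, 3, 3, 1]
```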
Model ::: RE-Specific Layers
Similar to the NER-specific layers, the output sequence from the final shared BiRNN layer is fed through zero or more RE-specific BiRNN layers. Let $\mathbf {h}^{r}_i$ denote the $i$th output from the final RE-specific BiRNN layer or the final shared BiRNN layer if there are no RE-specific BiRNN layers.
Following previous work BIBREF16, BIBREF11, BIBREF12, we predict relations between entities $e_i$ and $e_j$ using learned representations from the final tokens of the spans corresponding to $e_i$ and $e_j$. To this end, we filter the sequence $\mathbf {h}^{r}_{1:n}$ to include only elements $\mathbf {h}^{r}_{i}$ such that token $t_i$ is the final token in an entity span. During training, ground truth entity spans are used for filtering. During inference, predicted entity spans derived from $\hat{\mathbf {y}}^e_{1:n}$ are used. Each $\mathbf {h}^{r}_{i}$ is concatenated to a learned NER label embedding for $t_i$, $\mathbf {l}^{e}_{i}$, yielding $\mathbf {g}^{r}_{i} = \mathbf {h}^{r}_{i} \circ \mathbf {l}^{e}_{i}$.
Ground truth NER labels are used to obtain $\mathbf {l}^{e}_{1:n}$ during training, and predicted NER labels are used during inference.
Next, RE scores are computed for every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$. If $\mathcal {R}$ is the set of possible relations, we calculate the DistMult score BIBREF17 for every relation $r_k \in \mathcal {R}$ and every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$ as follows: $\textsc {DistMult}^{r_k}(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j) = (\mathbf {g}^{r}_i)^{\top } M^{r_k} \mathbf {g}^{r}_j$.
$M^{r_k}$ is a diagonal matrix such that $M^{r_k} \in \mathbb {R}^{p \times p}$, where $p$ is the dimensionality of $\mathbf {g}^r_i$. We also pass each RE-specific hidden representation $\mathbf {g}^{r}_i$ through a single feed forward layer: $\mathbf {f}^{r}_i = \text{FFNN}^{(r1)}(\mathbf {g}^{r}_i)$.
As in the case of $\text{FFNN}^{(e1)}$, the activation function of $\text{FFNN}^{(r1)}$ and its output size are treated as hyperparameters.
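Because each $M^{r_k}$ is diagonal, the bilinear DistMult score reduces to a sum over elementwise products. A small sketch, with made-up dimensions:

```python
import numpy as np

def distmult_score(g_i: np.ndarray, g_j: np.ndarray, m_rk: np.ndarray) -> float:
    """g_i^T diag(m_rk) g_j, where m_rk holds the diagonal entries of M^{r_k}."""
    return float(np.sum(g_i * m_rk * g_j))

p = 8                                          # assumed dimensionality of g^r_i
g_i, g_j = np.random.randn(p), np.random.randn(p)
relation_diagonals = np.random.randn(5, p)     # one learned diagonal per relation in R
scores = np.array([distmult_score(g_i, g_j, m) for m in relation_diagonals])
print(scores.shape)  # (5,) -- these per-relation values are what gets concatenated below
```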
Let $\textsc {DistMult}^r_{i,j}$ denote the concatenation of $\textsc {DistMult}^{r_k}(\mathbf {g}^r_i, \mathbf {g}^r_j)$ for all $r_k \in \mathcal {R}$ and let $\cos _{i,j}$ denote the cosine distance between vectors $\mathbf {f}^{r}_i$ and $\mathbf {f}^{r}_j$. We obtain RE scores for $(t_i, t_j)$ via a feed forward layer:
$\text{FFNN}^{(r2)}$ uses linear activation, and its output size is $|\mathcal {R}|$. Final relation predictions for a pair of tokens $(t_i, t_j)$, $\hat{\mathbf {y}}^r_{i,j}$, are obtained by passing $\mathbf {s}^r_{i,j}$ through an elementwise sigmoid layer. A relation is predicted for all outputs from this sigmoid layer exceeding $\theta ^r$, which we treat as a hyperparameter.
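Prediction from the final scores is a per-relation sigmoid followed by the threshold $\theta ^r$; the scores and threshold below are made up for illustration.

```python
import numpy as np

def predict_relations(raw_scores: np.ndarray, threshold: float) -> list:
    """raw_scores: s^r_{i,j} for one token pair, one entry per relation type.
    A relation is predicted whenever its sigmoid-activated score exceeds the threshold."""
    probs = 1.0 / (1.0 + np.exp(-raw_scores))
    return [k for k, p in enumerate(probs) if p > threshold]

relations = ["Works-For", "Kill", "Organization-Based-In", "Lives-In", "Located-In"]
raw_scores = np.array([-3.1, -5.0, -0.2, 4.2, 0.8])   # illustrative scores for one pair
predicted = [relations[k] for k in predict_relations(raw_scores, threshold=0.9)]
print(predicted)  # ['Lives-In']
```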
Model ::: Training
During training, character embeddings, label embeddings, weights for the weighted average layer, all BiRNN weights, all feed forward networks, and $M^{r_k}$ for all $r_k \in \mathcal {R}$ are trained in a supervised manner. As mentioned above, BIO tags for all tokens are used as labels for the NER task. For the RE task, binary outputs are used. For every relation $r_k \in \mathcal {R}$ and for every pair of tokens $(t_i, t_j)$ such that $t_i$ is the final token of entity $e_i$ and $t_j$ is the final token of entity $e_j$, the RE label $y^{r_k}_{i,j} = 1$ if $(e_i, e_j, r_k)$ is a true relation. Otherwise, we have $y^{r_k}_{i,j} = 0$.
For both output layers, we compute the cross-entropy loss. If $\mathcal {L}_{NER}$ and $\mathcal {L}_{RE}$ denote the cross-entropy loss for the NER and RE outputs, respectively, then the total model loss is given by $\mathcal {L} = \mathcal {L}_{NER} + \lambda ^r \mathcal {L}_{RE}$. The weight $\lambda ^r$ is treated as a hyperparameter and allows for tuning the relative importance of the NER and RE tasks during training. Final training for both datasets used a value of 5 for $\lambda ^r$.
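The combined objective can be sketched as follows. The $\lambda ^r = 5$ weight comes from the text above; the shapes are placeholders, and the NER term is shown as plain token-level cross-entropy even though the model proper uses a CRF likelihood.

```python
import torch
import torch.nn.functional as F

def joint_loss(ner_logits, ner_tags, re_logits, re_targets, lambda_r: float = 5.0):
    """Total loss L = L_NER + lambda_r * L_RE."""
    l_ner = F.cross_entropy(ner_logits.reshape(-1, ner_logits.size(-1)), ner_tags.reshape(-1))
    l_re = F.binary_cross_entropy_with_logits(re_logits, re_targets)  # binary, per relation type
    return l_ner + lambda_r * l_re

ner_logits = torch.randn(2, 7, 9)                      # (batch, seq_len, n_BIO_tags)
ner_tags = torch.randint(0, 9, (2, 7))                 # gold BIO tag indices
re_logits = torch.randn(2, 4, 5)                       # (batch, candidate pairs, n_relations)
re_targets = torch.randint(0, 2, (2, 4, 5)).float()    # y^{r_k}_{i,j} targets
print(joint_loss(ner_logits, ner_tags, re_logits, re_targets))
```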
For the ADE dataset, we trained using the Adam optimizer with a mini-batch size of 16. For the CoNLL04 dataset, we used the Nesterov Adam optimizer with a mini-batch size of 2. For both datasets, we used a learning rate of $5\times 10^{-4}$. During training, dropout was applied before each BiRNN layer, other than the character BiRNN layer, and before the RE scoring layer.
Experiments
We evaluate the architecture described above using the following two publicly available datasets.
Experiments ::: ADE
The Adverse Drug Events (ADE) dataset BIBREF6 consists of 4,272 sentences describing adverse effects from the use of particular drugs. The text is annotated using two entity types (Adverse-Effect and Drug) and a single relation type (Adverse-Effect). Of the entity instances in the dataset, 120 overlap with other entities. Similar to prior work using BIO/BILOU tagging, we remove overlapping entities. We preserve the entity with the longer span and remove any relations involving a removed entity.
There are no official training, dev, and test splits for the ADE dataset, leading previous researchers to use some form of cross-validation when evaluating their models on this dataset. We split out 10% of the data to use as a held-out dev set. Final results are obtained via 10-fold cross-validation using the remaining 90% of the data and the hyperparameters obtained from tuning on the dev set. Following previous work, we report macro-averaged performance metrics averaged across each of the 10 folds.
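This evaluation protocol can be sketched with scikit-learn's KFold; the sentence array, training call, and scoring call below are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

sentences = np.arange(4272)                                   # stand-in for the ADE sentences
rng = np.random.default_rng(0)
dev_idx = rng.choice(len(sentences), size=len(sentences) // 10, replace=False)  # 10% dev set
rest = np.setdiff1d(np.arange(len(sentences)), dev_idx)       # remaining 90%

fold_scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(rest):
    train_ids, test_ids = rest[train_idx], rest[test_idx]
    # train_model(sentences[train_ids]); fold_scores.append(evaluate(sentences[test_ids]))
    fold_scores.append(0.0)                                   # placeholder fold metric
print(np.mean(fold_scores))                                   # macro metrics averaged over 10 folds
```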
Experiments ::: CoNLL04
The CoNLL04 dataset BIBREF7 consists of 1,441 sentences from news articles annotated with four entity types (Location, Organization, People, and Other) and five relation types (Works-For, Kill, Organization-Based-In, Lives-In, and Located-In). This dataset contains no overlapping entities.
We use the three-way split of BIBREF16, which contains 910 training, 243 dev, and 288 test sentences. All hyperparameters are tuned against the dev set. Final results are obtained by averaging results from five trials with random weight initializations in which we trained on the combined training and dev sets and evaluated on the test set. As previous work using the CoNLL04 dataset has reported both micro- and macro-averages, we report both sets of metrics.
In evaluating NER performance on these datasets, a predicted entity is only considered a true positive if both the entity's span and span type are correctly predicted. In evaluating RE performance, we follow previous work in adopting a strict evaluation method wherein a predicted relation is only considered correct if the spans corresponding to the two arguments of this relation and the entity types of these spans are also predicted correctly. We experimented with LSTMs and GRUs for all BiRNN layers in the model and experimented with using $1-3$ shared BiRNN layers and $0-3$ task-specific BiRNN layers for each task. Hyperparameters used for final training are listed in Table TABREF17.
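The strict matching criterion can be made explicit with a small helper; entities and relations are represented here as simplified tuples rather than model outputs.

```python
def strict_match(pred, gold) -> bool:
    """pred/gold: ((head_start, head_end, head_type), (tail_start, tail_end, tail_type), relation).
    A predicted relation is a true positive only if both argument spans, both entity types,
    and the relation type all exactly match the gold annotation."""
    return pred == gold

gold = ((0, 2, "People"), (6, 6, "Location"), "Lives-In")
pred_wrong_type = ((0, 2, "Other"), (6, 6, "Location"), "Lives-In")
print(strict_match(gold, gold))            # True
print(strict_match(pred_wrong_type, gold)) # False: the head entity type is wrong
```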
Experiments ::: Results
Full results for the performance of our model, as well as other recent work, are shown in Table TABREF18. In addition to precision, recall, and F1 scores for both tasks, we show the average of the F1 scores across both tasks. On the ADE dataset, we achieve SOTA results for both the NER and RE tasks. On the CoNLL04 dataset, we achieve SOTA results on the NER task, while our performance on the RE task is competitive with other recent models. On both datasets, we achieve SOTA results when considering the average F1 score across both tasks. The largest gain relative to the previous SOTA performance is on the RE task of the ADE dataset, where we see an absolute improvement of 4.5 on the macro-average F1 score.
While the model of Eberts and Ulges eberts2019span outperforms our proposed architecture on the CoNLL04 RE task, their results come at the cost of greater model complexity. As mentioned above, Eberts and Ulges fine-tune the BERT$_{\text{BASE}}$ model, which has 110 million trainable parameters. In contrast, given the hyperparameters used for final training on the CoNLL04 dataset, our proposed architecture has approximately 6 million trainable parameters.
The fact that the optimal number of task-specific layers differed between the two datasets demonstrates the value of taking the number of shared and task-specific layers to be a hyperparameter of our model architecture. As shown in Table TABREF17, the final hyperparameters used for the CoNLL04 dataset included one more RE-specific BiRNN layer than did the final hyperparameters used for the ADE dataset. We suspect that this is due to the limited number of relations and entities in the ADE dataset. For most examples in this dataset, it is sufficient to correctly identify a single Drug entity, a single Adverse-Effect entity, and an Adverse-Effect relation between the two entities. Thus, the NER and RE tasks for this dataset are more closely related than they are in the case of the CoNLL04 dataset. Intuitively, cases in which the NER and RE problems can be solved by relying on more shared information should require fewer task-specific layers.
Experiments ::: Ablation Study
To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:
We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.
We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.
We averaged the results for each set of hyperparameters across three trials with random weight initializations.
Table TABREF26 contains the results from the ablation study. These results show that the proposed architecture benefits from the inclusion of both NER- and RE-specific layers. However, the RE task benefits much more from the inclusion of these task-specific layers than does the NER task. We take this to reflect the fact that the RE task is more difficult than the NER task for the CoNLL04 dataset, and therefore benefits the most from its own task-specific layers. This is consistent with the fact that the hyperparameter setting that performs best on the RE task is that with no NER-specific BiRNN layers, i.e. the setting that retained RE-specific BiRNN layers. In contrast, the inclusion of task-specific BiRNN layers of any kind had relatively little impact on the performance on the NER task.
Note that the setting with no NER-specific layers is somewhat similar to the setup of Nguyen and Verspoor's nguyen2019end model, but includes an additional shared and an additional RE-specific layer. That this setting outperforms Nguyen et al.'s model reflects the contribution of having deeper shared and RE-specific layers, separate from the contribution of NER-specific layers.
Conclusion
Our results demonstrate the utility of using deeper task-specificity in models for joint NER and RE and of tuning the level of task-specificity separately for different datasets. We conclude that prior work on joint NER and RE undervalues the importance of task-specificity. More generally, these results underscore the importance of correctly balancing the number of shared and task-specific parameters in MTL.
We note that other approaches that employ a single model architecture across different datasets are laudable insofar as we should prefer models that can generalize well across domains with little domain-specific hyperparameter tuning. On the other hand, the similarity between the NER and RE tasks varies across domains, and improved performance can be achieved on these tasks by tuning the number of shared and task-specific parameters. In our work, we treated the number of shared and task-specific layers as a hyperparameter to be tuned for each dataset, but future work may explore ways to select this aspect of the model architecture in a more principled way. For example, Vandenhende et al. vandenhende2019branched propose using a measure of affinity between tasks to determine how many layers to share in MTL networks. Task affinity scores of NER and RE could be computed for different textual domains or datasets, which could then guide the decision regarding the number of shared and task-specific layers to employ for joint NER and RE models deployed on these domains.
Other extensions to the present work could include fine-tuning the model used to obtain contextual word embeddings, e.g. ELMo or BERT, during training. In order to minimize the number of trainable parameters, we did not employ such fine-tuning in our model, but we suspect a fine-tuning approach could lead to improved performance relative to our results. An additional opportunity for future work would be an extension of this work to other related NLP tasks, such as co-reference resolution and cross-sentential relation extraction. | Yes |
bcec22a75c1f899e9fcea4996457cf177c50c4c5 | bcec22a75c1f899e9fcea4996457cf177c50c4c5_0 | Q: What were the variables in the ablation study?
Text: Introduction
Multi-task learning (MTL) refers to machine learning approaches in which information and representations are shared to solve multiple, related tasks. Relative to single-task learning approaches, MTL often shows improved performance on some or all sub-tasks and can be more computationally efficient BIBREF0, BIBREF1, BIBREF2, BIBREF3. We focus here on a form of MTL known as hard parameter sharing. Hard parameter sharing refers to the use of deep learning models in which inputs to models first pass through a number of shared layers. The hidden representations produced by these shared layers are then fed as inputs to a number of task-specific layers.
Within the domain of natural language processing (NLP), MTL approaches have been applied to a wide range of problems BIBREF3. In recent years, one particularly fruitful application of MTL to NLP has been joint solving of named entity recognition (NER) and relation extraction (RE), two important information extraction tasks with applications in search, question answering, and knowledge base construction BIBREF4. NER consists in the identification of spans of text as corresponding to named entities and the classification of each span's entity type. RE consists in the identification of all triples $(e_i, e_j, r)$, where $e_i$ and $e_j$ are named entities and $r$ is a relation that holds between $e_i$ and $e_j$ according to the text. For example, in Figure FIGREF1, Edgar Allan Poe and Boston are named entities of the types People and Location, respectively. In addition, the text indicates that the Lives-In relation obtains between Edgar Allan Poe and Boston.
One option for solving these two problems is a pipeline approach using two independent models, each designed to solve a single task, with the output of the NER model serving as an input to the RE model. However, MTL approaches offer a number of advantages over the pipeline approach. First, the pipeline approach is more susceptible to error propagation, wherein prediction errors from the NER model enter the RE model as inputs that the latter model cannot correct. Second, the pipeline approach only allows solutions to the NER task to inform the RE task, but not vice versa. In contrast, the joint approach allows solutions to either task to inform the other. For example, learning that there is a Lives-In relation between Edgar Allan Poe and Boston can be useful for determining the types of these entities. Finally, the joint approach can be computationally more efficient than the pipeline approach. As mentioned above, MTL approaches are generally more efficient than single-task learning alternatives. This is because solutions to related tasks often rely on similar information, which in an MTL setting only needs to be represented in one model in order to solve all tasks. For example, the fact that Edgar Allan Poe is followed by "was born" can help a model determine both that Edgar Allan Poe is an instance of a People entity and that the sentence expresses a Lives-In relation.
While the choice as to which and how many layers to share between tasks is known to be an important factor relevant to the performance of MTL models BIBREF5, BIBREF2, this issue has received relatively little attention within the context of joint NER and RE. As we show below in Section 2, prior proposals for jointly solving NER and RE have typically made use of very few task-specific parameters or have mostly used task-specific parameters only for the RE task. We seek to correct for this oversight by proposing a novel neural architecture for joint NER and RE. In particular, we make the following contributions:
We allow for deeper task-specificity than does previous work via the use of additional task-specific bidirectional recurrent neural networks (BiRNNs) for both tasks.
Because the relatedness between the NER and RE tasks is not constant across all textual domains, we take the number of shared and task-specific layers to be an explicit hyperparameter of the model that can be tuned separately for different datasets.
We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture.
Related Work
We focus in this section on previous deep learning approaches to solving the tasks of NER and RE, as this work is most directly comparable to our proposal. Most work on joint NER and RE has adopted a BIO or BILOU scheme for the NER task, where each token is labeled to indicate whether it is the (B)eginning of an entity, (I)nside an entity, or (O)utside an entity. The BILOU scheme extends these labels to indicate if a token is the (L)ast token of an entity or is a (U)nit, i.e. the only token within an entity span.
Several approaches treat the NER and RE tasks as if they were a single task. For example, Gupta et al. gupta-etal-2016-table, following Miwa and Sasaki miwa-sasaki-2014-modeling, treat the two tasks as a table-filling problem where each cell in the table corresponds to a pair of tokens $(t_i, t_j)$ in the input text. For the diagonal of the table, the cell label is the BILOU tag for $t_i$. All other cells are labeled with the relation $r$, if it exists, such that $(e_i, e_j, r)$, where $e_i$ is the entity whose span's final token is $t_i$, is in the set of true relations. A BiRNN is trained to fill the cells of the table. Zheng et al. Zheng2017 introduce a BILOU tagging scheme that incorporates relation information into the tags, allowing them to treat both tasks as if they were a single NER task. A series of two bidirectional LSTM (BiLSTM) layers and a final softmax layer are used to produce output tags. Li et al. li2019entity solve both tasks as a form of multi-turn question answering in which the input text is queried with question templates first to detect entities and then, given the detected entities, to detect any relations between these entities. Li et al. use BERT BIBREF8 as the backbone of their question-answering model and produce answers by tagging the input text with BILOU tags to identify the span corresponding to the answer(s).
The above approaches allow for very little task-specificity, since both the NER task and the RE task are coerced into a single task. Other approaches incorporate greater task-specificity in one of two ways. First, several models share the majority of model parameters between the NER and RE tasks, but also have separate scoring and/or output layers used to produce separate outputs for each task. For example, Katiyar and Cardie katiyar-cardie-2017-going and Bekoulis et al. bekoulis2018joint propose models in which token representations first pass through one or more shared BiLSTM layers. Katiyar and Cardie use a softmax layer to tag tokens with BILOU tags to solve the NER task and use an attention layer to detect relations between each pair of entities. Bekoulis et al., following Lample et al. Lample2016, use a conditional random field (CRF) layer to produce BIO tags for the NER task. The output from the shared BiLSTM layer for every pair of tokens is passed through relation scoring and sigmoid layers to predict relations.
A second method of incorporating greater task-specificity into these models is via deeper layers for solving the RE task. Miwa and Bansal miwa-bansal-2016-end and Li et al. li2017neural pass token representations through a BiLSTM layer and then use a softmax layer to label each token with the appropriate BILOU label. Both proposals then use a type of tree-structured bidirectional LSTM layer stacked on top of the shared BiLSTM to solve the RE task. Nguyen and Verspoor nguyen2019end use BiLSTM and CRF layers to perform the NER task. Label embeddings are created from predicted NER labels, concatenated with token representations, and then passed through a RE-specific BiLSTM. A biaffine attention layer BIBREF9 operates on the output of this BiLSTM to predict relations.
An alternative to the BIO/BILOU scheme is the span-based approach, wherein spans of the input text are directly labeled as to whether they correspond to any entity and, if so, their entity types. Luan et al. Luan2018 adopt a span-based approach in which token representations are first passed through a BiLSTM layer. The output from the BiLSTM is used to construct representations of candidate entity spans, which are then scored for both the NER and RE tasks via feed forward layers. Luan et al. Luan2019 follow a similar approach, but construct coreference and relation graphs between entities to propagate information between entities connected in these graphs. The resulting entity representations are then classified for NER and RE via feed forward layers. To the best of our knowledge, the current SOTA model for joint NER and RE is the span-based proposal of Eberts and Ulges eberts2019span. In this architecture, token representations are obtained using a pre-trained BERT model that is fine-tuned during training. Representations for candidate entity spans are obtained by max pooling over all tokens in each span. Span representations are passed through an entity classification layer to solve the NER task. Representations of all pairs of spans that are predicted to be entities and representations of the contexts between these pairs are then passed through a final layer with sigmoid activation to predict relations between entities. With respect to their degrees of task-specificity, these span-based approaches resemble the BIO/BILOU approaches in which the majority of model parameters are shared, but each task possesses independent scoring and/or output layers.
Overall, previous approaches to joint NER and RE have experimented little with deep task-specificity, with the exception of those models that include additional layers for the RE task. To our knowledge, no work has considered including additional NER-specific layers beyond scoring and/or output layers. This may reflect a residual influence of the pipeline approach, in which the NER task must be solved first before additional layers are used to solve the RE task. However, there is no a priori reason to think that the RE task would benefit more from additional task-specific layers than the NER task. We also note that while previous work has tackled joint NER and RE in a variety of textual domains, in all cases the number of shared and task-specific parameters is held constant across these domains.
Model
The architecture proposed here is inspired by several previous proposals BIBREF10, BIBREF11, BIBREF12. We treat the NER task as a sequence labeling problem using BIO labels. Token representations are first passed through a series of shared, BiRNN layers. Stacked on top of these shared BiRNN layers is a sequence of task-specific BiRNN layers for both the NER and RE tasks. We take the number of shared and task-specific layers to be a hyperparameter of the model. Both sets of task-specific BiRNN layers are followed by task-specific scoring and output layers. Figure FIGREF4 illustrates this architecture. Below, we use superscript $e$ for NER-specific variables and layers and superscript $r$ for RE-specific variables and layers.
Model ::: Shared Layers
We obtain contextual token embeddings using the pre-trained ELMo 5.5B model BIBREF13. For each token in the input text $t_i$, this model returns three vectors, which we combine via a weighted averaging layer. Each token $t_i$'s weighted ELMo embedding $\mathbf {t}^{elmo}_{i}$ is concatenated to a pre-trained GloVe embedding BIBREF14 $\mathbf {t}^{glove}_{i}$, a character-level word embedding $\mathbf {t}^{char}_i$ learned via a single BiRNN layer BIBREF15 and a one-hot encoded casing vector $\mathbf {t}^{casing}_i$. The full representation of $t_i$ is given by $\mathbf {v}_i$ (where $\circ $ denotes concatenation):
For an input text with $n$ tokens, $\mathbf {v}_{1:n}$ are fed as input to a sequence of one or more shared BiRNN layers, with the output sequence from the $i$th shared BiRNN layer serving as the input sequence to the $i + 1$st shared BiRNN layer.
Model ::: NER-Specific Layers
The final shared BiRNN layer is followed by a sequence of zero or more NER-specific BiRNN layers; the output of the final shared BiRNN layer serves as input to the first NER-specific BiRNN layer, if such a layer exists, and the output from the $i$th NER-specific BiRNN layer serves as input to the $i + 1$st NER-specific BiRNN layer. For every token $t_i$, let $\mathbf {h}^{e}_i$ denote an NER-specific hidden representation for $t_i$, corresponding to the $i$th element of the output sequence from the final NER-specific BiRNN layer, or from the final shared BiRNN layer if there are zero NER-specific BiRNN layers. An NER score for token $t_i$, $\mathbf {s}^{e}_i$, is obtained by passing $\mathbf {h}^{e}_i$ through a series of two feed forward layers:
The activation function of $\text{FFNN}^{(e1)}$ and its output size are treated as hyperparameters. $\text{FFNN}^{(e2)}$ uses linear activation and its output size is $|\mathcal {E}|$, where $\mathcal {E}$ is the set of possible entity types. The sequence of NER scores for all tokens, $\mathbf {s}^{e}_{1:n}$, is then passed as input to a linear-chain CRF layer to produce the final BIO tag predictions, $\hat{\mathbf {y}}^e_{1:n}$. During inference, Viterbi decoding is used to determine the most likely sequence $\hat{\mathbf {y}}^e_{1:n}$.
Model ::: RE-Specific Layers
Similar to the NER-specific layers, the output sequence from the final shared BiRNN layer is fed through zero or more RE-specific BiRNN layers. Let $\mathbf {h}^{r}_i$ denote the $i$th output from the final RE-specific BiRNN layer or the final shared BiRNN layer if there are no RE-specific BiRNN layers.
Following previous work BIBREF16, BIBREF11, BIBREF12, we predict relations between entities $e_i$ and $e_j$ using learned representations from the final tokens of the spans corresponding to $e_i$ and $e_j$. To this end, we filter the sequence $\mathbf {h}^{r}_{1:n}$ to include only elements $\mathbf {h}^{r}_{i}$ such that token $t_i$ is the final token in an entity span. During training, ground truth entity spans are used for filtering. During inference, predicted entity spans derived from $\hat{\mathbf {y}}^e_{1:n}$ are used. Each $\mathbf {h}^{r}_{i}$ is concatenated to a learned NER label embedding for $t_i$, $\mathbf {l}^{e}_{i}$:
Ground truth NER labels are used to obtain $\mathbf {l}^{e}_{1:n}$ during training, and predicted NER labels are used during inference.
Next, RE scores are computed for every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$. If $\mathcal {R}$ is the set of possible relations, we calculate the DistMult score BIBREF17 for every relation $r_k \in \mathcal {R}$ and every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$ as follows:
$M^{r_k}$ is a diagonal matrix such that $M^{r_k} \in \mathbb {R}^{p \times p}$, where $p$ is the dimensionality of $\mathbf {g}^r_i$. We also pass each RE-specific hidden representation $\mathbf {g}^{r}_i$ through a single feed forward layer:
As in the case of $\text{FFNN}^{(e1)}$, the activation function of $\text{FFNN}^{(r1)}$ and its output size are treated as hyperparameters.
Let $\textsc {DistMult}^r_{i,j}$ denote the concatenation of $\textsc {DistMult}^{r_k}(\mathbf {g}^r_i, \mathbf {g}^r_j)$ for all $r_k \in \mathcal {R}$ and let $\cos _{i,j}$ denote the cosine distance between vectors $\mathbf {f}^{r}_i$ and $\mathbf {f}^{r}_j$. We obtain RE scores for $(t_i, t_j)$ via a feed forward layer:
$\text{FFNN}^{(r2)}$ uses linear activation, and its output size is $|\mathcal {R}|$. Final relation predictions for a pair of tokens $(t_i, t_j)$, $\hat{\mathbf {y}}^r_{i,j}$, are obtained by passing $\mathbf {s}^r_{i,j}$ through an elementwise sigmoid layer. A relation is predicted for all outputs from this sigmoid layer exceeding $\theta ^r$, which we treat as a hyperparameter.
Model ::: Training
During training, character embeddings, label embeddings, weights for the weighted average layer, all BiRNN weights, all feed forward networks, and $M^{r_k}$ for all $r_k \in \mathcal {R}$ are trained in a supervised manner. As mentioned above, BIO tags for all tokens are used as labels for the NER task. For the RE task, binary outputs are used. For every relation $r_k \in \mathcal {R}$ and for every pair of tokens $(t_i, t_j)$ such that $t_i$ is the final token of entity $e_i$ and $t_j$ is the final token of entity $e_j$, the RE label $y^{r_k}_{i,j} = 1$ if $(e_i, e_j, r_k)$ is a true relation. Otherwise, we have $y^{r_k}_{i,j} = 0$.
For both output layers, we compute the cross-entropy loss. If $\mathcal {L}_{NER}$ and $\mathcal {L}_{RE}$ denote the cross-entropy loss for the NER and RE outputs, respectively, then the total model loss is given by $\mathcal {L} = \mathcal {L}_{NER} + \lambda ^r \mathcal {L}_{RE}$. The weight $\lambda ^r$ is treated as a hyperparameter and allows for tuning the relative importance of the NER and RE tasks during training. Final training for both datasets used a value of 5 for $\lambda ^r$.
For the ADE dataset, we trained using the Adam optimizer with a mini-batch size of 16. For the CoNLL04 dataset, we used the Nesterov Adam optimizer with a mini-batch size of 2. For both datasets, we used a learning rate of $5\times 10^{-4}$. During training, dropout was applied before each BiRNN layer, other than the character BiRNN layer, and before the RE scoring layer.
Experiments
We evaluate the architecture described above using the following two publicly available datasets.
Experiments ::: ADE
The Adverse Drug Events (ADE) dataset BIBREF6 consists of 4,272 sentences describing adverse effects from the use of particular drugs. The text is annotated using two entity types (Adverse-Effect and Drug) and a single relation type (Adverse-Effect). Of the entity instances in the dataset, 120 overlap with other entities. Similar to prior work using BIO/BILOU tagging, we remove overlapping entities. We preserve the entity with the longer span and remove any relations involving a removed entity.
There are no official training, dev, and test splits for the ADE dataset, leading previous researchers to use some form of cross-validation when evaluating their models on this dataset. We split out 10% of the data to use as a held-out dev set. Final results are obtained via 10-fold cross-validation using the remaining 90% of the data and the hyperparameters obtained from tuning on the dev set. Following previous work, we report macro-averaged performance metrics averaged across each of the 10 folds.
Experiments ::: CoNLL04
The CoNLL04 dataset BIBREF7 consists of 1,441 sentences from news articles annotated with four entity types (Location, Organization, People, and Other) and five relation types (Works-For, Kill, Organization-Based-In, Lives-In, and Located-In). This dataset contains no overlapping entities.
We use the three-way split of BIBREF16, which contains 910 training, 243 dev, and 288 test sentences. All hyperparameters are tuned against the dev set. Final results are obtained by averaging results from five trials with random weight initializations in which we trained on the combined training and dev sets and evaluated on the test set. As previous work using the CoNLL04 dataset has reported both micro- and macro-averages, we report both sets of metrics.
In evaluating NER performance on these datasets, a predicted entity is only considered a true positive if both the entity's span and span type are correctly predicted. In evaluating RE performance, we follow previous work in adopting a strict evaluation method wherein a predicted relation is only considered correct if the spans corresponding to the two arguments of this relation and the entity types of these spans are also predicted correctly. We experimented with LSTMs and GRUs for all BiRNN layers in the model and experimented with using $1-3$ shared BiRNN layers and $0-3$ task-specific BiRNN layers for each task. Hyperparameters used for final training are listed in Table TABREF17.
Experiments ::: Results
Full results for the performance of our model, as well as other recent work, are shown in Table TABREF18. In addition to precision, recall, and F1 scores for both tasks, we show the average of the F1 scores across both tasks. On the ADE dataset, we achieve SOTA results for both the NER and RE tasks. On the CoNLL04 dataset, we achieve SOTA results on the NER task, while our performance on the RE task is competitive with other recent models. On both datasets, we achieve SOTA results when considering the average F1 score across both tasks. The largest gain relative to the previous SOTA performance is on the RE task of the ADE dataset, where we see an absolute improvement of 4.5 on the macro-average F1 score.
While the model of Eberts and Ulges eberts2019span outperforms our proposed architecture on the CoNLL04 RE task, their results come at the cost of greater model complexity. As mentioned above, Eberts and Ulges fine-tune the BERT$_{\text{BASE}}$ model, which has 110 million trainable parameters. In contrast, given the hyperparameters used for final training on the CoNLL04 dataset, our proposed architecture has approximately 6 million trainable parameters.
The fact that the optimal number of task-specific layers differed between the two datasets demonstrates the value of taking the number of shared and task-specific layers to be a hyperparameter of our model architecture. As shown in Table TABREF17, the final hyperparameters used for the CoNLL04 dataset included one more RE-specific BiRNN layer than did the final hyperparameters used for the ADE dataset. We suspect that this is due to the limited number of relations and entities in the ADE dataset. For most examples in this dataset, it is sufficient to correctly identify a single Drug entity, a single Adverse-Effect entity, and an Adverse-Effect relation between the two entities. Thus, the NER and RE tasks for this dataset are more closely related than they are in the case of the CoNLL04 dataset. Intuitively, cases in which the NER and RE problems can be solved by relying on more shared information should require fewer task-specific layers.
Experiments ::: Ablation Study
To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:
We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.
We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.
We averaged the results for each set of hyperparameters across three trials with random weight initializations.
Table TABREF26 contains the results from the ablation study. These results show that the proposed architecture benefits from the inclusion of both NER- and RE-specific layers. However, the RE task benefits much more from the inclusion of these task-specific layers than does the NER task. We take this to reflect the fact that the RE task is more difficult than the NER task for the CoNLL04 dataset, and therefore benefits the most from its own task-specific layers. This is consistent with the fact that the hyperparameter setting that performs best on the RE task is that with no NER-specific BiRNN layers, i.e. the setting that retained RE-specific BiRNN layers. In contrast, the inclusion of task-specific BiRNN layers of any kind had relatively little impact on the performance on the NER task.
Note that the setting with no NER-specific layers is somewhat similar to the setup of Nguyen and Verspoor's nguyen2019end model, but includes an additional shared and an additional RE-specific layer. That this setting outperforms Nguyen et al.'s model reflects the contribution of having deeper shared and RE-specific layers, separate from the contribution of NER-specific layers.
Conclusion
Our results demonstrate the utility of using deeper task-specificity in models for joint NER and RE and of tuning the level of task-specificity separately for different datasets. We conclude that prior work on joint NER and RE undervalues the importance of task-specificity. More generally, these results underscore the importance of correctly balancing the number of shared and task-specific parameters in MTL.
We note that other approaches that employ a single model architecture across different datasets are laudable insofar as we should prefer models that can generalize well across domains with little domain-specific hyperparameter tuning. On the other hand, the similarity between the NER and RE tasks varies across domains, and improved performance can be achieved on these tasks by tuning the number of shared and task-specific parameters. In our work, we treated the number of shared and task-specific layers as a hyperparameter to be tuned for each dataset, but future work may explore ways to select this aspect of the model architecture in a more principled way. For example, Vandenhende et al. vandenhende2019branched propose using a measure of affinity between tasks to determine how many layers to share in MTL networks. Task affinity scores of NER and RE could be computed for different textual domains or datasets, which could then guide the decision regarding the number of shared and task-specific layers to employ for joint NER and RE models deployed on these domains.
Other extensions to the present work could include fine-tuning the model used to obtain contextual word embeddings, e.g. ELMo or BERT, during training. In order to minimize the number of trainable parameters, we did not employ such fine-tuning in our model, but we suspect a fine-tuning approach could lead to improved performance relative to our results. An additional opportunity for future work would be an extension of this work to other related NLP tasks, such as co-reference resolution and cross-sentential relation extraction. | (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind |
58f50397a075f128b45c6b824edb7a955ee8cba1 | 58f50397a075f128b45c6b824edb7a955ee8cba1_0 | Q: How many shared layers are in the system?
Text: Introduction
Multi-task learning (MTL) refers to machine learning approaches in which information and representations are shared to solve multiple, related tasks. Relative to single-task learning approaches, MTL often shows improved performance on some or all sub-tasks and can be more computationally efficient BIBREF0, BIBREF1, BIBREF2, BIBREF3. We focus here on a form of MTL known as hard parameter sharing. Hard parameter sharing refers to the use of deep learning models in which inputs to models first pass through a number of shared layers. The hidden representations produced by these shared layers are then fed as inputs to a number of task-specific layers.
Within the domain of natural language processing (NLP), MTL approaches have been applied to a wide range of problems BIBREF3. In recent years, one particularly fruitful application of MTL to NLP has been joint solving of named entity recognition (NER) and relation extraction (RE), two important information extraction tasks with applications in search, question answering, and knowledge base construction BIBREF4. NER consists in the identification of spans of text as corresponding to named entities and the classification of each span's entity type. RE consists in the identification of all triples $(e_i, e_j, r)$, where $e_i$ and $e_j$ are named entities and $r$ is a relation that holds between $e_i$ and $e_j$ according to the text. For example, in Figure FIGREF1, Edgar Allan Poe and Boston are named entities of the types People and Location, respectively. In addition, the text indicates that the Lives-In relation obtains between Edgar Allan Poe and Boston.
One option for solving these two problems is a pipeline approach using two independent models, each designed to solve a single task, with the output of the NER model serving as an input to the RE model. However, MTL approaches offer a number of advantages over the pipeline approach. First, the pipeline approach is more susceptible to error propagation, wherein prediction errors from the NER model enter the RE model as inputs that the latter model cannot correct. Second, the pipeline approach only allows solutions to the NER task to inform the RE task, but not vice versa. In contrast, the joint approach allows solutions to either task to inform the other. For example, learning that there is a Lives-In relation between Edgar Allan Poe and Boston can be useful for determining the types of these entities. Finally, the joint approach can be computationally more efficient than the pipeline approach. As mentioned above, MTL approaches are generally more efficient than single-task learning alternatives. This is because solutions to related tasks often rely on similar information, which in an MTL setting only needs to be represented in one model in order to solve all tasks. For example, the fact that Edgar Allan Poe is followed by "was born" can help a model determine both that Edgar Allan Poe is an instance of a People entity and that the sentence expresses a Lives-In relation.
While the choice as to which and how many layers to share between tasks is known to be an important factor relevant to the performance of MTL models BIBREF5, BIBREF2, this issue has received relatively little attention within the context of joint NER and RE. As we show below in Section 2, prior proposals for jointly solving NER and RE have typically made use of very few task-specific parameters or have mostly used task-specific parameters only for the RE task. We seek to correct for this oversight by proposing a novel neural architecture for joint NER and RE. In particular, we make the following contributions:
We allow for deeper task-specificity than does previous work via the use of additional task-specific bidirectional recurrent neural networks (BiRNNs) for both tasks.
Because the relatedness between the NER and RE tasks is not constant across all textual domains, we take the number of shared and task-specific layers to be an explicit hyperparameter of the model that can be tuned separately for different datasets.
We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture.
Related Work
We focus in this section on previous deep learning approaches to solving the tasks of NER and RE, as this work is most directly comparable to our proposal. Most work on joint NER and RE has adopted a BIO or BILOU scheme for the NER task, where each token is labeled to indicate whether it is the (B)eginning of an entity, (I)nside an entity, or (O)utside an entity. The BILOU scheme extends these labels to indicate if a token is the (L)ast token of an entity or is a (U)nit, i.e. the only token within an entity span.
Several approaches treat the NER and RE tasks as if they were a single task. For example, Gupta et al. gupta-etal-2016-table, following Miwa and Sasaki miwa-sasaki-2014-modeling, treat the two tasks as a table-filling problem where each cell in the table corresponds to a pair of tokens $(t_i, t_j)$ in the input text. For the diagonal of the table, the cell label is the BILOU tag for $t_i$. All other cells are labeled with the relation $r$, if it exists, such that $(e_i, e_j, r)$, where $e_i$ is the entity whose span's final token is $t_i$, is in the set of true relations. A BiRNN is trained to fill the cells of the table. Zheng et al. Zheng2017 introduce a BILOU tagging scheme that incorporates relation information into the tags, allowing them to treat both tasks as if they were a single NER task. A series of two bidirectional LSTM (BiLSTM) layers and a final softmax layer are used to produce output tags. Li et al. li2019entity solve both tasks as a form of multi-turn question answering in which the input text is queried with question templates first to detect entities and then, given the detected entities, to detect any relations between these entities. Li et al. use BERT BIBREF8 as the backbone of their question-answering model and produce answers by tagging the input text with BILOU tags to identify the span corresponding to the answer(s).
The above approaches allow for very little task-specificity, since both the NER task and the RE task are coerced into a single task. Other approaches incorporate greater task-specificity in one of two ways. First, several models share the majority of model parameters between the NER and RE tasks, but also have separate scoring and/or output layers used to produce separate outputs for each task. For example, Katiyar and Cardie katiyar-cardie-2017-going and Bekoulis et al. bekoulis2018joint propose models in which token representations first pass through one or more shared BiLSTM layers. Katiyar and Cardie use a softmax layer to tag tokens with BILOU tags to solve the NER task and use an attention layer to detect relations between each pair of entities. Bekoulis et al., following Lample et al. Lample2016, use a conditional random field (CRF) layer to produce BIO tags for the NER task. The output from the shared BiLSTM layer for every pair of tokens is passed through relation scoring and sigmoid layers to predict relations.
A second method of incorporating greater task-specificity into these models is via deeper layers for solving the RE task. Miwa and Bansal miwa-bansal-2016-end and Li et al. li2017neural pass token representations through a BiLSTM layer and then use a softmax layer to label each token with the appropriate BILOU label. Both proposals then use a type of tree-structured bidirectional LSTM layer stacked on top of the shared BiLSTM to solve the RE task. Nguyen and Verspoor nguyen2019end use BiLSTM and CRF layers to perform the NER task. Label embeddings are created from predicted NER labels, concatenated with token representations, and then passed through a RE-specific BiLSTM. A biaffine attention layer BIBREF9 operates on the output of this BiLSTM to predict relations.
An alternative to the BIO/BILOU scheme is the span-based approach, wherein spans of the input text are directly labeled as to whether they correspond to any entity and, if so, their entity types. Luan et al. Luan2018 adopt a span-based approach in which token representations are first passed through a BiLSTM layer. The output from the BiLSTM is used to construct representations of candidate entity spans, which are then scored for both the NER and RE tasks via feed forward layers. Luan et al. Luan2019 follow a similar approach, but construct coreference and relation graphs between entities to propagate information between entities connected in these graphs. The resulting entity representations are then classified for NER and RE via feed forward layers. To the best of our knowledge, the current SOTA model for joint NER and RE is the span-based proposal of Eberts and Ulges eberts2019span. In this architecture, token representations are obtained using a pre-trained BERT model that is fine-tuned during training. Representations for candidate entity spans are obtained by max pooling over all tokens in each span. Span representations are passed through an entity classification layer to solve the NER task. Representations of all pairs of spans that are predicted to be entities and representations of the contexts between these pairs are then passed through a final layer with sigmoid activation to predict relations between entities. With respect to their degrees of task-specificity, these span-based approaches resemble the BIO/BILOU approaches in which the majority of model parameters are shared, but each task possesses independent scoring and/or output layers.
Overall, previous approaches to joint NER and RE have experimented little with deep task-specificity, with the exception of those models that include additional layers for the RE task. To our knowledge, no work has considered including additional NER-specific layers beyond scoring and/or output layers. This may reflect a residual influence of the pipeline approach in which the NER task must be solved first before additional layers are used to solve the RE task. However, there is no a priori reason to think that the RE task would benefit more from additional task-specific layers than the NER task. We also note that while previous work has tackled joint NER and RE in a variety of textual domains, in all cases the number of shared and task-specific parameters is held constant across these domains.
Model
The architecture proposed here is inspired by several previous proposals BIBREF10, BIBREF11, BIBREF12. We treat the NER task as a sequence labeling problem using BIO labels. Token representations are first passed through a series of shared, BiRNN layers. Stacked on top of these shared BiRNN layers is a sequence of task-specific BiRNN layers for both the NER and RE tasks. We take the number of shared and task-specific layers to be a hyperparameter of the model. Both sets of task-specific BiRNN layers are followed by task-specific scoring and output layers. Figure FIGREF4 illustrates this architecture. Below, we use superscript $e$ for NER-specific variables and layers and superscript $r$ for RE-specific variables and layers.
Model ::: Shared Layers
We obtain contextual token embeddings using the pre-trained ELMo 5.5B model BIBREF13. For each token in the input text $t_i$, this model returns three vectors, which we combine via a weighted averaging layer. Each token $t_i$'s weighted ELMo embedding $\mathbf {t}^{elmo}_{i}$ is concatenated to a pre-trained GloVe embedding BIBREF14 $\mathbf {t}^{glove}_{i}$, a character-level word embedding $\mathbf {t}^{char}_i$ learned via a single BiRNN layer BIBREF15 and a one-hot encoded casing vector $\mathbf {t}^{casing}_i$. The full representation of $t_i$ is given by $\mathbf {v}_i$ (where $\circ $ denotes concatenation):
For an input text with $n$ tokens, $\mathbf {v}_{1:n}$ are fed as input to a sequence of one or more shared BiRNN layers, with the output sequence from the $i$th shared BiRNN layer serving as the input sequence to the $i + 1$st shared BiRNN layer.
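As a concrete illustration (a minimal PyTorch sketch under assumed dimensions, not the authors' released code), the shared portion of the network can be written roughly as follows; the choice of LSTM cells and the layer sizes are placeholders for the tuned hyperparameters:

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Concatenated token features fed through a stack of shared BiRNN layers."""
    def __init__(self, feat_dim, hidden_dim, num_shared_layers=2, rnn_cell=nn.LSTM):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = feat_dim
        for _ in range(num_shared_layers):
            self.layers.append(rnn_cell(in_dim, hidden_dim,
                                        batch_first=True, bidirectional=True))
            in_dim = 2 * hidden_dim  # each BiRNN's output feeds the next shared layer

    def forward(self, elmo, glove, char, casing):
        # v_i = t_i^elmo o t_i^glove o t_i^char o t_i^casing (per-token concatenation)
        v = torch.cat([elmo, glove, char, casing], dim=-1)   # (batch, n, feat_dim)
        out = v
        for rnn in self.layers:
            out, _ = rnn(out)                                # (batch, n, 2 * hidden_dim)
        return out
```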
Model ::: NER-Specific Layers
The final shared BiRNN layer is followed by a sequence of zero or more NER-specific BiRNN layers; the output of the final shared BiRNN layer serves as input to the first NER-specific BiRNN layer, if such a layer exists, and the output from the $i$th NER-specific BiRNN layer serves as input to the $i + 1$st NER-specific BiRNN layer. For every token $t_i$, let $\mathbf {h}^{e}_i$ denote an NER-specific hidden representation for $t_i$ corresponding to the $i$th element of the output sequence from the final NER-specific BiRNN layer or the final shared BiRNN layer if there are zero NER-specific BiRNN layers. An NER score for token $t_i$, $\mathbf {s}^{e}_i$, is obtained by passing $\mathbf {h}^{e}_i$ through a series of two feed forward layers:
The activation function of $\text{FFNN}^{(e1)}$ and its output size are treated as hyperparameters. $\text{FFNN}^{(e2)}$ uses linear activation and its output size is $|\mathcal {E}|$, where $\mathcal {E}$ is the set of possible entity types. The sequence of NER scores for all tokens, $\mathbf {s}^{e}_{1:n}$, is then passed as input to a linear-chain CRF layer to produce the final BIO tag predictions, $\hat{\mathbf {y}}^e_{1:n}$. During inference, Viterbi decoding is used to determine the most likely sequence $\hat{\mathbf {y}}^e_{1:n}$.
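As an illustration (not the authors' code), the NER scoring head can be sketched as two feed-forward layers that produce emission scores for a linear-chain CRF; the ReLU activation and hidden size below are stand-ins for the tuned hyperparameters:

```python
import torch.nn as nn

class NERScorer(nn.Module):
    """Maps NER-specific hidden states h^e_i to per-token emission scores s^e_i."""
    def __init__(self, in_dim, ffnn_dim, num_tags):
        super().__init__()
        # FFNN^(e1): activation and output size are hyperparameters (ReLU assumed here)
        self.ffnn_e1 = nn.Sequential(nn.Linear(in_dim, ffnn_dim), nn.ReLU())
        # FFNN^(e2): linear activation, output size |E| (the BIO tag set)
        self.ffnn_e2 = nn.Linear(ffnn_dim, num_tags)

    def forward(self, h_e):                       # h_e: (batch, n, in_dim)
        return self.ffnn_e2(self.ffnn_e1(h_e))    # emission scores for the CRF
```

These scores would then be consumed by a linear-chain CRF layer (e.g. from the pytorch-crf package), whose Viterbi decoding yields the most likely BIO tag sequence at inference time.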
Model ::: RE-Specific Layers
Similar to the NER-specific layers, the output sequence from the final shared BiRNN layer is fed through zero or more RE-specific BiRNN layers. Let $\mathbf {h}^{r}_i$ denote the $i$th output from the final RE-specific BiRNN layer or the final shared BiRNN layer if there are no RE-specific BiRNN layers.
Following previous work BIBREF16, BIBREF11, BIBREF12, we predict relations between entities $e_i$ and $e_j$ using learned representations from the final tokens of the spans corresponding to $e_i$ and $e_j$. To this end, we filter the sequence $\mathbf {h}^{r}_{1:n}$ to include only elements $\mathbf {h}^{r}_{i}$ such that token $t_i$ is the final token in an entity span. During training, ground truth entity spans are used for filtering. During inference, predicted entity spans derived from $\hat{\mathbf {y}}^e_{1:n}$ are used. Each $\mathbf {h}^{r}_{i}$ is concatenated to a learned NER label embedding for $t_i$, $\mathbf {l}^{e}_{i}$:
Ground truth NER labels are used to obtain $\mathbf {l}^{e}_{1:n}$ during training, and predicted NER labels are used during inference.
Next, RE scores are computed for every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$. If $\mathcal {R}$ is the set of possible relations, we calculate the DistMult score BIBREF17 for every relation $r_k \in \mathcal {R}$ and every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$ as follows:
$M^{r_k}$ is a diagonal matrix such that $M^{r_k} \in \mathbb {R}^{p \times p}$, where $p$ is the dimensionality of $\mathbf {g}^r_i$. We also pass each RE-specific hidden representation $\mathbf {g}^{r}_i$ through a single feed forward layer:
As in the case of $\text{FFNN}^{(e1)}$, the activation function of $\text{FFNN}^{(r1)}$ and its output size are treated as hyperparameters.
Let $\textsc {DistMult}^r_{i,j}$ denote the concatenation of $\textsc {DistMult}^{r_k}(\mathbf {g}^r_i, \mathbf {g}^r_j)$ for all $r_k \in \mathcal {R}$ and let $\cos _{i,j}$ denote the cosine distance between vectors $\mathbf {f}^{r}_i$ and $\mathbf {f}^{r}_j$. We obtain RE scores for $(t_i, t_j)$ via a feed forward layer:
$\text{FFNN}^{(r2)}$ uses linear activation, and its output size is $|\mathcal {R}|$. Final relation predictions for a pair of tokens $(t_i, t_j)$, $\hat{\mathbf {y}}^r_{i,j}$, are obtained by passing $\mathbf {s}^r_{i,j}$ through an elementwise sigmoid layer. A relation is predicted for all outputs from this sigmoid layer exceeding $\theta ^r$, which we treat as a hyperparameter.
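The pairwise scoring step can be sketched as follows. The exact inputs to $\text{FFNN}^{(r2)}$ are given by the paper's equation (not reproduced in this text), so feeding it the concatenated DistMult scores and the cosine distance, as well as the threshold value, are assumptions made for illustration:

```python
import torch
import torch.nn.functional as F

def distmult_scores(g_i, g_j, rel_diag):
    """DistMult^{r_k}(g_i, g_j) = g_i^T M^{r_k} g_j for every relation r_k.
    g_i, g_j: (p,) RE-specific representations; rel_diag: (|R|, p) diagonals of the M^{r_k}."""
    return (rel_diag * g_i * g_j).sum(dim=-1)                 # shape (|R|,)

def relation_predictions(g_i, g_j, f_i, f_j, rel_diag, ffnn_r2, theta_r=0.9):
    dm = distmult_scores(g_i, g_j, rel_diag)                  # concatenated DistMult scores
    cos_dist = 1.0 - F.cosine_similarity(f_i, f_j, dim=0).unsqueeze(0)
    s_r = ffnn_r2(torch.cat([dm, cos_dist]))                  # linear layer with |R| outputs
    probs = torch.sigmoid(s_r)
    return probs, probs > theta_r                             # predict r_k where prob > theta^r
```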
Model ::: Training
During training, character embeddings, label embeddings, and weights for the weighted average layer, all BiRNN weights, all feed forward networks, and $M^{r_k}$ for all $r_k \in \mathcal {R}$ are trained in a supervised manner. As mentioned above, BIO tags for all tokens are used as labels for the NER task. For the RE task, binary outputs are used. For every relation $r_k \in \mathcal {R}$ and for every pair of tokens $(t_i, t_j)$ such that $t_i$ is the final token of entity $e_i$ and $t_j$ is the final token of entity $e_j$, the RE label $y^{r_k}_{i,j} = 1$ if $(e_i, e_j, r_k)$ is a true relation. Otherwise, we have $y^{r_k}_{i,j} = 0$.
For both output layers, we compute the cross-entropy loss. If $\mathcal {L}_{NER}$ and $\mathcal {L}_{RE}$ denote the cross-entropy loss for the NER and RE outputs, respectively, then the total model loss is given by $\mathcal {L} = \mathcal {L}_{NER} + \lambda ^r \mathcal {L}_{RE}$. The weight $\lambda ^r$ is treated as a hyperparameter and allows for tuning the relative importance of the NER and RE tasks during training. Final training for both datasets used a value of 5 for $\lambda ^r$.
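A compact sketch of the combined objective, assuming the NER term is the negative log-likelihood returned by the linear-chain CRF and the RE term is binary cross-entropy over the per-pair sigmoid outputs, with $\lambda ^r = 5$ as in the final runs:

```python
import torch.nn.functional as F

def joint_loss(ner_neg_log_likelihood, re_logits, re_labels, lambda_r=5.0):
    """L = L_NER + lambda^r * L_RE.
    re_logits / re_labels hold the per-pair, per-relation scores and binary targets y^{r_k}_{i,j}."""
    l_re = F.binary_cross_entropy_with_logits(re_logits, re_labels)
    return ner_neg_log_likelihood + lambda_r * l_re
```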
For the ADE dataset, we trained using the Adam optimizer with a mini-batch size of 16. For the CoNLL04 dataset, we used the Nesterov Adam optimizer with a mini-batch size of 2. For both datasets, we used a learning rate of $5\times 10^{-4}$. During training, dropout was applied before each BiRNN layer, other than the character BiRNN layer, and before the RE scoring layer.
Experiments
We evaluate the architecture described above using the following two publicly available datasets.
Experiments ::: ADE
The Adverse Drug Events (ADE) dataset BIBREF6 consists of 4,272 sentences describing adverse effects from the use of particular drugs. The text is annotated using two entity types (Adverse-Effect and Drug) and a single relation type (Adverse-Effect). Of the entity instances in the dataset, 120 overlap with other entities. Similar to prior work using BIO/BILOU tagging, we remove overlapping entities. We preserve the entity with the longer span and remove any relations involving a removed entity.
There are no official training, dev, and test splits for the ADE dataset, leading previous researchers to use some form of cross-validation when evaluating their models on this dataset. We split out 10% of the data to use as a held-out dev set. Final results are obtained via 10-fold cross-validation using the remaining 90% of the data and the hyperparameters obtained from tuning on the dev set. Following previous work, we report macro-averaged performance metrics averaged across each of the 10 folds.
Experiments ::: CoNLL04
The CoNLL04 dataset BIBREF7 consists of 1,441 sentences from news articles annotated with four entity types (Location, Organization, People, and Other) and five relation types (Works-For, Kill, Organization-Based-In, Lives-In, and Located-In). This dataset contains no overlapping entities.
We use the three-way split of BIBREF16, which contains 910 training, 243 dev, and 288 test sentences. All hyperparameters are tuned against the dev set. Final results are obtained by averaging results from five trials with random weight initializations in which we trained on the combined training and dev sets and evaluated on the test set. As previous work using the CoNLL04 dataset has reported both micro- and macro-averages, we report both sets of metrics.
In evaluating NER performance on these datasets, a predicted entity is only considered a true positive if both the entity's span and span type are correctly predicted. In evaluating RE performance, we follow previous work in adopting a strict evaluation method wherein a predicted relation is only considered correct if the spans corresponding to the two arguments of this relation and the entity types of these spans are also predicted correctly. We experimented with LSTMs and GRUs for all BiRNN layers in the model and experimented with using $1-3$ shared BiRNN layers and $0-3$ task-specific BiRNN layers for each task. Hyperparameters used for final training are listed in Table TABREF17.
Experiments ::: Results
Full results for the performance of our model, as well as other recent work, are shown in Table TABREF18. In addition to precision, recall, and F1 scores for both tasks, we show the average of the F1 scores across both tasks. On the ADE dataset, we achieve SOTA results for both the NER and RE tasks. On the CoNLL04 dataset, we achieve SOTA results on the NER task, while our performance on the RE task is competitive with other recent models. On both datasets, we achieve SOTA results when considering the average F1 score across both tasks. The largest gain relative to the previous SOTA performance is on the RE task of the ADE dataset, where we see an absolute improvement of 4.5 on the macro-average F1 score.
While the model of Eberts and Ulges eberts2019span outperforms our proposed architecture on the CoNLL04 RE task, their results come at the cost of greater model complexity. As mentioned above, Eberts and Ulges fine-tune the BERTBASE model, which has 110 million trainable parameters. In contrast, given the hyperparameters used for final training on the CoNLL04 dataset, our proposed architecture has approximately 6 million trainable parameters.
The fact that the optimal number of task-specific layers differed between the two datasets demonstrates the value of taking the number of shared and task-specific layers to be a hyperparameter of our model architecture. As shown in Table TABREF17, the final hyperparameters used for the CoNLL04 dataset included one more RE-specific BiRNN layer than did the final hyperparameters used for the ADE dataset. We suspect that this is due to the limited number of relations and entities in the ADE dataset. For most examples in this dataset, it is sufficient to correctly identify a single Drug entity, a single Adverse-Effect entity, and an Adverse-Effect relation between the two entities. Thus, the NER and RE tasks for this dataset are more closely related than they are in the case of the CoNLL04 dataset. Intuitively, cases in which the NER and RE problems can be solved by relying on more shared information should require fewer task-specific layers.
Experiments ::: Ablation Study
To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:
We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.
We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.
We average the results for each set of hyperparameters across three trials with random weight initializations.
Table TABREF26 contains the results from the ablation study. These results show that the proposed architecture benefits from the inclusion of both NER- and RE-specific layers. However, the RE task benefits much more from the inclusion of these task-specific layers than does the NER task. We take this to reflect the fact that the RE task is more difficult than the NER task for the CoNLL04 dataset, and therefore benefits the most from its own task-specific layers. This is consistent with the fact that the hyperparameter setting that performs best on the RE task is that with no NER-specific BiRNN layers, i.e. the setting that retained RE-specific BiRNN layers. In contrast, the inclusion of task-specific BiRNN layers of any kind had relatively little impact on the performance on the NER task.
Note that the setting with no NER-specific layers is somewhat similar to the setup of Nguyen and Verspoor's nguyen2019end model, but includes an additional shared and an additional RE-specific layer. That this setting outperforms Nguyen et al.'s model reflects the contribution of having deeper shared and RE-specific layers, separate from the contribution of NER-specific layers.
Conclusion
Our results demonstrate the utility of using deeper task-specificity in models for joint NER and RE and of tuning the level of task-specificity separately for different datasets. We conclude that prior work on joint NER and RE undervalues the importance of task-specificity. More generally, these results underscore the importance of correctly balancing the number of shared and task-specific parameters in MTL.
We note that other approaches that employ a single model architecture across different datasets are laudable insofar as we should prefer models that can generalize well across domains with little domain-specific hyperparameter tuning. On the other hand, the similarity between the NER and RE tasks varies across domains, and improved performance can be achieved on these tasks by tuning the number of shared and task-specific parameters. In our work, we treated the number of shared and task-specific layers as a hyperparameter to be tuned for each dataset, but future work may explore ways to select this aspect of the model architecture in a more principled way. For example, Vandenhende et al. vandenhende2019branched propose using a measure of affinity between tasks to determine how many layers to share in MTL networks. Task affinity scores of NER and RE could be computed for different textual domains or datasets, which could then guide the decision regarding the number of shared and task-specific layers to employ for joint NER and RE models deployed on these domains.
Other extensions to the present work could include fine-tuning the model used to obtain contextual word embeddings, e.g. ELMo or BERT, during training. In order to minimize the number of trainable parameters, we did not employ such fine-tuning in our model, but we suspect a fine-tuning approach could lead to improved performance relative to our results. An additional opportunity for future work would be an extension of this work to other related NLP tasks, such as co-reference resolution and cross-sentential relation extraction. | 1 |
9adcc8c4a10fa0d58f235b740d8d495ee622d596 | 9adcc8c4a10fa0d58f235b740d8d495ee622d596_0 | Q: How many additional task-specific layers are introduced?
Text: Introduction
Multi-task learning (MTL) refers to machine learning approaches in which information and representations are shared to solve multiple, related tasks. Relative to single-task learning approaches, MTL often shows improved performance on some or all sub-tasks and can be more computationally efficient BIBREF0, BIBREF1, BIBREF2, BIBREF3. We focus here on a form of MTL known as hard parameter sharing. Hard parameter sharing refers to the use of deep learning models in which inputs to models first pass through a number of shared layers. The hidden representations produced by these shared layers are then fed as inputs to a number of task-specific layers.
Within the domain of natural language processing (NLP), MTL approaches have been applied to a wide range of problems BIBREF3. In recent years, one particularly fruitful application of MTL to NLP has been joint solving of named entity recognition (NER) and relation extraction (RE), two important information extraction tasks with applications in search, question answering, and knowledge base construction BIBREF4. NER consists in the identification of spans of text as corresponding to named entities and the classification of each span's entity type. RE consists in the identification of all triples $(e_i, e_j, r)$, where $e_i$ and $e_j$ are named entities and $r$ is a relation that holds between $e_i$ and $e_j$ according to the text. For example, in Figure FIGREF1, Edgar Allan Poe and Boston are named entities of the types People and Location, respectively. In addition, the text indicates that the Lives-In relation obtains between Edgar Allan Poe and Boston.
One option for solving these two problems is a pipeline approach using two independent models, each designed to solve a single task, with the output of the NER model serving as an input to the RE model. However, MTL approaches offer a number of advantages over the pipeline approach. First, the pipeline approach is more susceptible to error propagation wherein prediction errors from the NER model enter the RE model as inputs that the latter model cannot correct. Second, the pipeline approach only allows solutions to the NER task to inform the RE task, but not vice versa. In contrast, the joint approach allows for solutions to either task to inform the other. For example, learning that there is a Lives-In relation between Edgar Allan Poe and Boston can be useful for determining the types of these entities. Finally, the joint approach can be computationally more efficient than the pipeline approach. As mentioned above, MTL approaches are generally more efficient than single-task learning alternatives. This is due to the fact that solutions to related tasks often rely on similar information, which in an MTL setting only needs to be represented in one model in order to solve all tasks. For example, the fact that Edgar Allan Poe is followed by was born can help a model determine both that Edgar Allan Poe is an instance of a People entity and that the sentence expresses a Lives-In relation.
While the choice as to which and how many layers to share between tasks is known to be an important factor relevant to the performance of MTL models BIBREF5, BIBREF2, this issue has received relatively little attention within the context of joint NER and RE. As we show below in Section 2, prior proposals for jointly solving NER and RE have typically made use of very few task-specific parameters or have mostly used task-specific parameters only for the RE task. We seek to correct for this oversight by proposing a novel neural architecture for joint NER and RE. In particular, we make the following contributions:
We allow for deeper task-specificity than does previous work via the use of additional task-specific bidirectional recurrent neural networks (BiRNNs) for both tasks.
Because the relatedness between the NER and RE tasks is not constant across all textual domains, we take the number of shared and task-specific layers to be an explicit hyperparameter of the model that can be tuned separately for different datasets.
We evaluate the proposed architecture on two publicly available datasets: the Adverse Drug Events (ADE) dataset BIBREF6 and the CoNLL04 dataset BIBREF7. We show that our architecture is able to outperform the current state-of-the-art (SOTA) results on both the NER and RE tasks in the case of ADE. In the case of CoNLL04, our proposed architecture achieves SOTA performance on the NER task and achieves near SOTA performance on the RE task. On both datasets, our results are SOTA when averaging performance across both tasks. Moreover, we achieve these results using an order of magnitude fewer trainable parameters than the current SOTA architecture.
Related Work
We focus in this section on previous deep learning approaches to solving the tasks of NER and RE, as this work is most directly comparable to our proposal. Most work on joint NER and RE has adopted a BIO or BILOU scheme for the NER task, where each token is labeled to indicate whether it is the (B)eginning of an entity, (I)nside an entity, or (O)utside an entity. The BILOU scheme extends these labels to indicate if a token is the (L)ast token of an entity or is a (U)nit, i.e. the only token within an entity span.
Several approaches treat the NER and RE tasks as if they were a single task. For example, Gupta et al. gupta-etal-2016-table, following Miwa and Sasaki miwa-sasaki-2014-modeling, treat the two tasks as a table-filling problem where each cell in the table corresponds to a pair of tokens $(t_i, t_j)$ in the input text. For the diagonal of the table, the cell label is the BILOU tag for $t_i$. All other cells are labeled with the relation $r$, if it exists, such that $(e_i, e_j, r)$, where $e_i$ is the entity whose span's final token is $t_i$, is in the set of true relations. A BiRNN is trained to fill the cells of the table. Zheng et al. Zheng2017 introduce a BILOU tagging scheme that incorporates relation information into the tags, allowing them to treat both tasks as if they were a single NER task. A series of two bidirectional LSTM (BiLSTM) layers and a final softmax layer are used to produce output tags. Li et al. li2019entity solve both tasks as a form of multi-turn question answering in which the input text is queried with question templates first to detect entities and then, given the detected entities, to detect any relations between these entities. Li et al. use BERT BIBREF8 as the backbone of their question-answering model and produce answers by tagging the input text with BILOU tags to identify the span corresponding to the answer(s).
The above approaches allow for very little task-specificity, since both the NER task and the RE task are coerced into a single task. Other approaches incorporate greater task-specificity in one of two ways. First, several models share the majority of model parameters between the NER and RE tasks, but also have separate scoring and/or output layers used to produce separate outputs for each task. For example, Katiyar and Cardie katiyar-cardie-2017-going and Bekoulis et al. bekoulis2018joint propose models in which token representations first pass through one or more shared BiLSTM layers. Katiyar and Cardie use a softmax layer to tag tokens with BILOU tags to solve the NER task and use an attention layer to detect relations between each pair of entities. Bekoulis et al., following Lample et al. Lample2016, use a conditional random field (CRF) layer to produce BIO tags for the NER task. The output from the shared BiLSTM layer for every pair of tokens is passed through relation scoring and sigmoid layers to predict relations.
A second method of incorporating greater task-specificity into these models is via deeper layers for solving the RE task. Miwa and Bansal miwa-bansal-2016-end and Li et al. li2017neural pass token representations through a BiLSTM layer and then use a softmax layer to label each token with the appropriate BILOU label. Both proposals then use a type of tree-structured bidirectional LSTM layer stacked on top of the shared BiLSTM to solve the RE task. Nguyen and Verspoor nguyen2019end use BiLSTM and CRF layers to perform the NER task. Label embeddings are created from predicted NER labels, concatenated with token representations, and then passed through a RE-specific BiLSTM. A biaffine attention layer BIBREF9 operates on the output of this BiLSTM to predict relations.
An alternative to the BIO/BILOU scheme is the span-based approach, wherein spans of the input text are directly labeled as to whether they correspond to any entity and, if so, their entity types. Luan et al. Luan2018 adopt a span-based approach in which token representations are first passed through a BiLSTM layer. The output from the BiLSTM is used to construct representations of candidate entity spans, which are then scored for both the NER and RE tasks via feed forward layers. Luan et al. Luan2019 follow a similar approach, but construct coreference and relation graphs between entities to propagate information between entities connected in these graphs. The resulting entity representations are then classified for NER and RE via feed forward layers. To the best of our knowledge, the current SOTA model for joint NER and RE is the span-based proposal of Eberts and Ulges eberts2019span. In this architecture, token representations are obtained using a pre-trained BERT model that is fine-tuned during training. Representations for candidate entity spans are obtained by max pooling over all tokens in each span. Span representations are passed through an entity classification layer to solve the NER task. Representations of all pairs of spans that are predicted to be entities and representations of the contexts between these pairs are then passed through a final layer with sigmoid activation to predict relations between entities. With respect to their degrees of task-specificity, these span-based approaches resemble the BIO/BILOU approaches in which the majority of model parameters are shared, but each task possesses independent scoring and/or output layers.
Overall, previous approaches to joint NER and RE have experimented little with deep task-specificity, with the exception of those models that include additional layers for the RE task. To our knowledge, no work has considered including additional NER-specific layers beyond scoring and/or output layers. This may reflect a residual influence of the pipeline approach in which the NER task must be solved first before additional layers are used to solve the RE task. However, there is no a priori reason to think that the RE task would benefit more from additional task-specific layers than the NER task. We also note that while previous work has tackled joint NER and RE in a variety of textual domains, in all cases the number of shared and task-specific parameters is held constant across these domains.
Model
The architecture proposed here is inspired by several previous proposals BIBREF10, BIBREF11, BIBREF12. We treat the NER task as a sequence labeling problem using BIO labels. Token representations are first passed through a series of shared, BiRNN layers. Stacked on top of these shared BiRNN layers is a sequence of task-specific BiRNN layers for both the NER and RE tasks. We take the number of shared and task-specific layers to be a hyperparameter of the model. Both sets of task-specific BiRNN layers are followed by task-specific scoring and output layers. Figure FIGREF4 illustrates this architecture. Below, we use superscript $e$ for NER-specific variables and layers and superscript $r$ for RE-specific variables and layers.
Model ::: Shared Layers
We obtain contextual token embeddings using the pre-trained ELMo 5.5B model BIBREF13. For each token in the input text $t_i$, this model returns three vectors, which we combine via a weighted averaging layer. Each token $t_i$'s weighted ELMo embedding $\mathbf {t}^{elmo}_{i}$ is concatenated to a pre-trained GloVe embedding BIBREF14 $\mathbf {t}^{glove}_{i}$, a character-level word embedding $\mathbf {t}^{char}_i$ learned via a single BiRNN layer BIBREF15 and a one-hot encoded casing vector $\mathbf {t}^{casing}_i$. The full representation of $t_i$ is given by $\mathbf {v}_i$ (where $\circ $ denotes concatenation):
For an input text with $n$ tokens, $\mathbf {v}_{1:n}$ are fed as input to a sequence of one or more shared BiRNN layers, with the output sequence from the $i$th shared BiRNN layer serving as the input sequence to the $i + 1$st shared BiRNN layer.
Model ::: NER-Specific Layers
The final shared BiRNN layer is followed by a sequence of zero or more NER-specific BiRNN layers; the output of the final shared BiRNN layer serves as input to the first NER-specific BiRNN layer, if such a layer exists, and the output from the $i$th NER-specific BiRNN layer serves as input to the $i + 1$st NER-specific BiRNN layer. For every token $t_i$, let $\mathbf {h}^{e}_i$ denote an NER-specific hidden representation for $t_i$ corresponding to the $i$th element of the output sequence from the final NER-specific BiRNN layer or the final shared BiRNN layer if there are zero NER-specific BiRNN layers. An NER score for token $t_i$, $\mathbf {s}^{e}_i$, is obtained by passing $\mathbf {h}^{e}_i$ through a series of two feed forward layers:
The activation function of $\text{FFNN}^{(e1)}$ and its output size are treated as hyperparameters. $\text{FFNN}^{(e2)}$ uses linear activation and its output size is $|\mathcal {E}|$, where $\mathcal {E}$ is the set of possible entity types. The sequence of NER scores for all tokens, $\mathbf {s}^{e}_{1:n}$, is then passed as input to a linear-chain CRF layer to produce the final BIO tag predictions, $\hat{\mathbf {y}}^e_{1:n}$. During inference, Viterbi decoding is used to determine the most likely sequence $\hat{\mathbf {y}}^e_{1:n}$.
Model ::: RE-Specific Layers
Similar to the NER-specific layers, the output sequence from the final shared BiRNN layer is fed through zero or more RE-specific BiRNN layers. Let $\mathbf {h}^{r}_i$ denote the $i$th output from the final RE-specific BiRNN layer or the final shared BiRNN layer if there are no RE-specific BiRNN layers.
Following previous work BIBREF16, BIBREF11, BIBREF12, we predict relations between entities $e_i$ and $e_j$ using learned representations from the final tokens of the spans corresponding to $e_i$ and $e_j$. To this end, we filter the sequence $\mathbf {h}^{r}_{1:n}$ to include only elements $\mathbf {h}^{r}_{i}$ such that token $t_i$ is the final token in an entity span. During training, ground truth entity spans are used for filtering. During inference, predicted entity spans derived from $\hat{\mathbf {y}}^e_{1:n}$ are used. Each $\mathbf {h}^{r}_{i}$ is concatenated to a learned NER label embedding for $t_i$, $\mathbf {l}^{e}_{i}$:
Ground truth NER labels are used to obtain $\mathbf {l}^{e}_{1:n}$ during training, and predicted NER labels are used during inference.
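A sketch of this input construction is given below; the BIO span-end heuristic and the label_embedding lookup interface are simplifying assumptions made for illustration:

```python
import torch

def build_re_inputs(h_r, bio_tags, label_embedding):
    """Keep only hidden states of tokens that end an entity span and concatenate each with
    the embedding of its NER label (gold tags during training, predicted tags at inference).
    bio_tags: list of strings such as "B-Peop", "I-Peop", "O".
    label_embedding: assumed callable mapping a tag string to a 1-D embedding tensor."""
    span_ends = [i for i, tag in enumerate(bio_tags)
                 if tag != "O" and (i + 1 == len(bio_tags)
                                    or not bio_tags[i + 1].startswith("I-"))]
    g_r = [torch.cat([h_r[i], label_embedding(bio_tags[i])]) for i in span_ends]
    return span_ends, g_r
```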
Next, RE scores are computed for every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$. If $\mathcal {R}$ is the set of possible relations, we calculate the DistMult score BIBREF17 for every relation $r_k \in \mathcal {R}$ and every pair $(\mathbf {g}^{r}_i, \mathbf {g}^{r}_j)$ as follows:
$M^{r_k}$ is a diagonal matrix such that $M^{r_k} \in \mathbb {R}^{p \times p}$, where $p$ is the dimensionality of $\mathbf {g}^r_i$. We also pass each RE-specific hidden representation $\mathbf {g}^{r}_i$ through a single feed forward layer:
As in the case of $\text{FFNN}^{(e1)}$, the activation function of $\text{FFNN}^{(r1)}$ and its output size are treated as hyperparameters.
Let $\textsc {DistMult}^r_{i,j}$ denote the concatenation of $\textsc {DistMult}^{r_k}(\mathbf {g}^r_i, \mathbf {g}^r_j)$ for all $r_k \in \mathcal {R}$ and let $\cos _{i,j}$ denote the cosine distance between vectors $\mathbf {f}^{r}_i$ and $\mathbf {f}^{r}_j$. We obtain RE scores for $(t_i, t_j)$ via a feed forward layer:
$\text{FFNN}^{(r2)}$ uses linear activation, and its output size is $|\mathcal {R}|$. Final relation predictions for a pair of tokens $(t_i, t_j)$, $\hat{\mathbf {y}}^r_{i,j}$, are obtained by passing $\mathbf {s}^r_{i,j}$ through an elementwise sigmoid layer. A relation is predicted for all outputs from this sigmoid layer exceeding $\theta ^r$, which we treat as a hyperparameter.
Model ::: Training
During training, character embeddings, label embeddings, and weights for the weighted average layer, all BiRNN weights, all feed forward networks, and $M^{r_k}$ for all $r_k \in \mathcal {R}$ are trained in a supervised manner. As mentioned above, BIO tags for all tokens are used as labels for the NER task. For the RE task, binary outputs are used. For every relation $r_k \in \mathcal {R}$ and for every pair of tokens $(t_i, t_j)$ such that $t_i$ is the final token of entity $e_i$ and $t_j$ is the final token of entity $e_j$, the RE label $y^{r_k}_{i,j} = 1$ if $(e_i, e_j, r_k)$ is a true relation. Otherwise, we have $y^{r_k}_{i,j} = 0$.
For both output layers, we compute the cross-entropy loss. If $\mathcal {L}_{NER}$ and $\mathcal {L}_{RE}$ denote the cross-entropy loss for the NER and RE outputs, respectively, then the total model loss is given by $\mathcal {L} = \mathcal {L}_{NER} + \lambda ^r \mathcal {L}_{RE}$. The weight $\lambda ^r$ is treated as a hyperparameter and allows for tuning the relative importance of the NER and RE tasks during training. Final training for both datasets used a value of 5 for $\lambda ^r$.
For the ADE dataset, we trained using the Adam optimizer with a mini-batch size of 16. For the CoNLL04 dataset, we used the Nesterov Adam optimizer with a mini-batch size of 2. For both datasets, we used a learning rate of $5\times 10^{-4}$. During training, dropout was applied before each BiRNN layer, other than the character BiRNN layer, and before the RE scoring layer.
Experiments
We evaluate the architecture described above using the following two publicly available datasets.
Experiments ::: ADE
The Adverse Drug Events (ADE) dataset BIBREF6 consists of 4,272 sentences describing adverse effects from the use of particular drugs. The text is annotated using two entity types (Adverse-Effect and Drug) and a single relation type (Adverse-Effect). Of the entity instances in the dataset, 120 overlap with other entities. Similar to prior work using BIO/BILOU tagging, we remove overlapping entities. We preserve the entity with the longer span and remove any relations involving a removed entity.
There are no official training, dev, and test splits for the ADE dataset, leading previous researchers to use some form of cross-validation when evaluating their models on this dataset. We split out 10% of the data to use as a held-out dev set. Final results are obtained via 10-fold cross-validation using the remaining 90% of the data and the hyperparameters obtained from tuning on the dev set. Following previous work, we report macro-averaged performance metrics averaged across each of the 10 folds.
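A sketch of this evaluation protocol (the random seed and fold shuffling are assumptions):

```python
from sklearn.model_selection import KFold, train_test_split

def ade_evaluation_splits(examples, seed=0):
    """Hold out 10% as a dev set for hyperparameter tuning, then run 10-fold
    cross-validation over the remaining 90%; metrics are macro-averaged across folds."""
    rest, dev = train_test_split(examples, test_size=0.10, random_state=seed)
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(rest):
        train = [rest[i] for i in train_idx]
        test = [rest[i] for i in test_idx]
        yield train, dev, test
```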
Experiments ::: CoNLL04
The CoNLL04 dataset BIBREF7 consists of 1,441 sentences from news articles annotated with four entity types (Location, Organization, People, and Other) and five relation types (Works-For, Kill, Organization-Based-In, Lives-In, and Located-In). This dataset contains no overlapping entities.
We use the three-way split of BIBREF16, which contains 910 training, 243 dev, and 288 test sentences. All hyperparameters are tuned against the dev set. Final results are obtained by averaging results from five trials with random weight initializations in which we trained on the combined training and dev sets and evaluated on the test set. As previous work using the CoNLL04 dataset has reported both micro- and macro-averages, we report both sets of metrics.
In evaluating NER performance on these datasets, a predicted entity is only considered a true positive if both the entity's span and span type are correctly predicted. In evaluating RE performance, we follow previous work in adopting a strict evaluation method wherein a predicted relation is only considered correct if the spans corresponding to the two arguments of this relation and the entity types of these spans are also predicted correctly. We experimented with LSTMs and GRUs for all BiRNN layers in the model and experimented with using $1-3$ shared BiRNN layers and $0-3$ task-specific BiRNN layers for each task. Hyperparameters used for final training are listed in Table TABREF17.
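The strict scoring rule can be made explicit with a small sketch; representing entities and relations as tuples of token offsets and labels is an assumption made for illustration:

```python
def strict_re_prf(predicted, gold):
    """A predicted relation counts as a true positive only if both argument spans, both
    entity types, and the relation label all match the gold annotation.  Items are
    (head_span, head_type, tail_span, tail_type, relation) tuples, where each span is a
    (start, end) token-offset pair."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```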
Experiments ::: Results
Full results for the performance of our model, as well as other recent work, are shown in Table TABREF18. In addition to precision, recall, and F1 scores for both tasks, we show the average of the F1 scores across both tasks. On the ADE dataset, we achieve SOTA results for both the NER and RE tasks. On the CoNLL04 dataset, we achieve SOTA results on the NER task, while our performance on the RE task is competitive with other recent models. On both datasets, we achieve SOTA results when considering the average F1 score across both tasks. The largest gain relative to the previous SOTA performance is on the RE task of the ADE dataset, where we see an absolute improvement of 4.5 on the macro-average F1 score.
While the model of Eberts and Ulges eberts2019span outperforms our proposed architecture on the CoNLL04 RE task, their results come at the cost of greater model complexity. As mentioned above, Eberts and Ulges fine-tune the BERTBASE model, which has 110 million trainable parameters. In contrast, given the hyperparameters used for final training on the CoNLL04 dataset, our proposed architecture has approximately 6 million trainable parameters.
The fact that the optimal number of task-specific layers differed between the two datasets demonstrates the value of taking the number of shared and task-specific layers to be a hyperparameter of our model architecture. As shown in Table TABREF17, the final hyperparameters used for the CoNLL04 dataset included one more RE-specific BiRNN layer than did the final hyperparameters used for the ADE dataset. We suspect that this is due to the limited number of relations and entities in the ADE dataset. For most examples in this dataset, it is sufficient to correctly identify a single Drug entity, a single Adverse-Effect entity, and an Adverse-Effect relation between the two entities. Thus, the NER and RE tasks for this dataset are more closely related than they are in the case of the CoNLL04 dataset. Intuitively, cases in which the NER and RE problems can be solved by relying on more shared information should require fewer task-specific layers.
Experiments ::: Ablation Study
To further demonstrate the effectiveness of the additional task-specific BiRNN layers in our architecture, we conducted an ablation study using the CoNLL04 dataset. We trained and evaluated in the same manner described above, using the same hyperparameters, with the following exceptions:
We used either (i) zero NER-specific BiRNN layers, (ii) zero RE-specific BiRNN layers, or (iii) zero task-specific BiRNN layers of any kind.
We increased the number of shared BiRNN layers to keep the total number of model parameters consistent with the number of parameters in the baseline model.
We average the results for each set of hyperparameters across three trials with random weight initializations.
Table TABREF26 contains the results from the ablation study. These results show that the proposed architecture benefits from the inclusion of both NER- and RE-specific layers. However, the RE task benefits much more from the inclusion of these task-specific layers than does the NER task. We take this to reflect the fact that the RE task is more difficult than the NER task for the CoNLL04 dataset, and therefore benefits the most from its own task-specific layers. This is consistent with the fact that the hyperparameter setting that performs best on the RE task is that with no NER-specific BiRNN layers, i.e. the setting that retained RE-specific BiRNN layers. In contrast, the inclusion of task-specific BiRNN layers of any kind had relatively little impact on the performance on the NER task.
Note that the setting with no NER-specific layers is somewhat similar to the setup of Nguyen and Verspoor's nguyen2019end model, but includes an additional shared and an additional RE-specific layer. That this setting outperforms Nguyen et al.'s model reflects the contribution of having deeper shared and RE-specific layers, separate from the contribution of NER-specific layers.
Conclusion
Our results demonstrate the utility of using deeper task-specificity in models for joint NER and RE and of tuning the level of task-specificity separately for different datasets. We conclude that prior work on joint NER and RE undervalues the importance of task-specificity. More generally, these results underscore the importance of correctly balancing the number of shared and task-specific parameters in MTL.
We note that other approaches that employ a single model architecture across different datasets are laudable insofar as we should prefer models that can generalize well across domains with little domain-specific hyperparameter tuning. On the other hand, the similarity between the NER and RE tasks varies across domains, and improved performance can be achieved on these tasks by tuning the number of shared and task-specific parameters. In our work, we treated the number of shared and task-specific layers as a hyperparameter to be tuned for each dataset, but future work may explore ways to select this aspect of the model architecture in a more principled way. For example, Vandenhende et al. vandenhende2019branched propose using a measure of affinity between tasks to determine how many layers to share in MTL networks. Task affinity scores of NER and RE could be computed for different textual domains or datasets, which could then guide the decision regarding the number of shared and task-specific layers to employ for joint NER and RE models deployed on these domains.
Other extensions to the present work could include fine-tuning the model used to obtain contextual word embeddings, e.g. ELMo or BERT, during training. In order to minimize the number of trainable parameters, we did not employ such fine-tuning in our model, but we suspect a fine-tuning approach could lead to improved performance relative to our results. An additional opportunity for future work would be an extension of this work to other related NLP tasks, such as co-reference resolution and cross-sentential relation extraction. | 2 for the ADE dataset and 3 for the CoNLL04 dataset |
91c81807374f2459990e5f9f8103906401abc5c2 | 91c81807374f2459990e5f9f8103906401abc5c2_0 | Q: What is barycentric Newton diagram?
Text: Introduction
With growing diversity in personal food preference and regional cuisine style, personalized information systems that can transform a recipe into any selected regional cuisine style that a user might prefer would help food companies and professional chefs create new recipes.
To achieve this goal, there are two significant challenges: 1) identifying the degree of regional cuisine style mixture of any selected recipe; and 2) developing an algorithm that shifts a recipe into any selected regional cuisine style.
As to the former challenge, with growing globalization and economic development, it is becoming difficult to identify a recipe’s regional cuisine style with specific traditional styles since regional cuisine patterns have been changing and converging in many countries throughout Asia, Europe, and elsewhere BIBREF0 . Regarding the latter challenge, to the best of our knowledge, little attention has been paid to developing algorithms which transform a recipe’s regional cuisine style into any selected regional cuisine pattern, cf. BIBREF1 , BIBREF2 . Previous studies have focused on developing an algorithm which suggests replaceable ingredients based on cooking action BIBREF3 , degree of similarity among ingredient BIBREF4 , ingredient network BIBREF5 , degree of typicality of ingredient BIBREF6 , and flavor (foodpairing.com).
The aim of this study is to propose a novel data-driven system for transformation of regional cuisine style. This system has two characteristics. First, we propose a new method for identifying a recipe’s regional cuisine style mixture by calculating the contribution of each ingredient to certain regional cuisine patterns, such as Mediterranean, French, or Japanese, by drawing on ingredient prevalence data from large recipe repositories. Also the system visualizes a recipe’s regional cuisine style mixture in two-dimensional space under barycentric coordinates using what we call a Newton diagram. Second, the system transforms a recipe’s regional cuisine pattern into any selected regional style by recommending replaceable ingredients in existing recipes.
As an example of this proposed system, we transform a traditional Japanese recipe, Sukiyaki, into French style.
Architecture of transformation system
Figure 1 shows the overall architecture of the transformation system, which consists of two steps: 1) identification and visualization of a recipe’s regional cuisine style mixture; and 2) algorithm which transforms a given recipe into any selected regional/country style. Details of the steps are described as follows.
Step 1: Identification and visualization of a recipe's regional cuisine style mixture
Using a neural network method as detailed below, we identify a recipe's regional cuisine style. The neural network model was constructed as shown in Figure 2 . The number of layers and dimension of each layer are also shown in Figure 2 .
When we enter a recipe, this model classifies which country or regional cuisine the recipe belongs to. The input is a vector whose dimension equals the total number of ingredients in the dataset; the entries corresponding to ingredients contained in the input recipe are set to 1, and all other entries are 0.
There are two hidden layers. Therefore, this model can consider a combination of ingredients to predict the country probability. Dropout is also applied to the hidden layers, randomly setting 20% of the node values to 0, which makes the network more robust. The final layer's dimension is the number of countries, here 20. In the final layer, the softmax function converts the outputs into probability values, each representing the probability that the recipe belongs to that country. ADAM BIBREF7 was used as an optimization technique. The number of epochs in training was 200. This network structure and these parameters were chosen after preliminary experiments so that the neural network could perform the country classification task as efficiently as possible.
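A minimal sketch of this classifier is given below; the hidden-layer sizes are placeholders, since the exact dimensions are specified in Figure 2, and the softmax is folded into the training loss:

```python
import torch.nn as nn

class CountryClassifier(nn.Module):
    """Multi-hot ingredient vector -> two hidden layers with 20% dropout -> 20 countries."""
    def __init__(self, num_ingredients, num_countries=20, h1=512, h2=128, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_ingredients, h1), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(h1, h2), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(h2, num_countries),   # softmax applied in the loss / at prediction time
        )

    def forward(self, multi_hot):           # (batch, num_ingredients) with 0/1 entries
        return self.net(multi_hot)

# Training would use torch.optim.Adam and nn.CrossEntropyLoss for 200 epochs
# on the 80% training split described below.
```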
In this study, we used a labeled corpus of Yummly recipes to train this neural network. The Yummly dataset has 39774 recipes from the 20 countries shown in Table 1 . Each recipe has ingredient and country information. First, we randomly divided the data set into 80% for training the neural network and 20% for testing how precisely it can classify. The final neural network achieved a classification accuracy of 79% on the test set. Figure 3 shows the confusion matrix of the neural network classification. Table 2 shows examples of ingredient classification results. Common ingredients, onions for example, that appear in many regional recipes are assigned to all countries with low probability. On the other hand, some ingredients that appear only in a specific country are assigned to that country with high probability. For example, mirin, a seasoning commonly used in Japan, is classified as Japanese with high probability.
By using the probability values that emerge from the activation function in the neural network, rather than just the final classification, we can draw a barycentric Newton diagram, as shown in Figure 4 . The basic idea of the visualization, drawing on Isaac Newton's visualization of the color spectrum BIBREF8 , is to express a mixture in terms of its constituents as represented in barycentric coordinates. This visualization allows an intuitive interpretation of which country a recipe belongs to. If the probability of Japanese is high, the recipe is mapped near the Japanese vertex. The countries on the Newton diagram are placed by spectral graph drawing BIBREF9 , so that similar countries are placed nearby on the circle. The calculation is as follows. First we define the adjacency matrix $W$ as the similarity between two countries. The similarity between country $i$ and $j$ is calculated as the cosine similarity of the country $i$ vector and the country $j$ vector (these vectors are defined in the next section): $W_{ij} = sim(vec_i, vec_j)$ . The degree matrix $D$ is a diagonal matrix where $D_{ii} = \sum _{j} W_{ij}$ . Next we calculate the eigendecomposition of $D^{-1}W$ . The eigenvectors corresponding to the second and third smallest eigenvalues are used for placing the countries, and are normalized so as to place the countries on the circle.
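The placement and plotting steps can be sketched as follows, assuming country_vecs are the country embeddings defined in the next section and country_probs is the classifier's 20-dimensional output for a recipe:

```python
import numpy as np

def place_countries(country_vecs):
    """Spectral placement: cosine-similarity adjacency W, degree matrix D, then the
    eigenvectors of D^{-1} W for the second and third smallest eigenvalues, normalised
    onto the unit circle (following the description above)."""
    V = country_vecs / np.linalg.norm(country_vecs, axis=1, keepdims=True)
    W = V @ V.T                                        # W_ij = cosine similarity
    D_inv = np.diag(1.0 / W.sum(axis=1))
    eigvals, eigvecs = np.linalg.eig(D_inv @ W)
    order = np.argsort(eigvals.real)
    xy = eigvecs[:, order[1:3]].real                   # 2nd and 3rd smallest eigenvalues
    return xy / np.linalg.norm(xy, axis=1, keepdims=True)

def recipe_point(country_xy, country_probs):
    """Barycentric coordinates: the recipe is plotted at the probability-weighted
    average of the country anchor points."""
    return country_probs @ country_xy
```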
Step 2: Transformation algorithm for transforming regional cuisine style
If we want to change a given recipe into one with a high probability of a specific country by changing just one ingredient, which ingredient should be used instead?
When we change one ingredient $x_i$ in the recipe to ingredient $x_j$ , the country probabilities can be recomputed using the above neural network model. If we want to shift the recipe toward a high probability of a specific country $c$ , we can find the ingredient $x_j$ that maximizes the probability $P(C=c|r - x_i + x_j)$ , where $r$ is the recipe. However, with this method, regardless of the ingredient $x_i$ , only specific ingredients having a high probability of country $c$ are always selected. In this system, we want to select ingredients that are similar to ingredient $x_i$ and also have a high probability of country $c$ . Therefore, we propose an extension of word2vec as a method of finding ingredients resembling ingredient $x_i$ .
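Stated as code, this naive criterion amounts to a brute-force search over candidate substitutes; predict_country_probs is an assumed wrapper around the trained classifier that returns a dictionary of country probabilities:

```python
def best_substitute(recipe, x_i, target_country, vocabulary, predict_country_probs):
    """Score every candidate x_j in place of x_i by P(C = target_country | r - x_i + x_j)
    and return the highest-scoring one."""
    base = [ing for ing in recipe if ing != x_i]
    candidates = (ing for ing in vocabulary if ing not in recipe)
    return max(candidates,
               key=lambda x_j: predict_country_probs(base + [x_j])[target_country])
```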
Word2vec is a technique proposed in the field of natural language processing BIBREF10 . As the name implies, it is a method to vectorize words, and similar words are represented by similar vectors. To train word2vec, the skip-gram model is used. In the skip-gram model, the objective is to learn word vector representations that can predict nearby words. The objective function is
$$\sum _{d \in D} \sum _{w_i \in d} \sum _{-n \le j \le n, j \ne 0} \log P(w_{i + j}|w_i) $$ (Eq. 10)
where $D$ is the set of documents, $d$ is a document, $w_i$ is a word, and $n$ is the window size. This model predicts the $n$ words before and after the input word, as described on the left side of Figure 5 . The objective function is to maximize the likelihood of the prediction of the surrounding word $w_{i+j}$ given the center word $w_i$ . The probability is
$$P(w_j|w_i) = \frac{\exp (v_{w_i}^Tv_{w_j}^{^{\prime }})}{\sum _{w \in W} \exp (v_{w_i}^Tv_w^{^{\prime }})}$$ (Eq. 11)
where $v_w \in \mathbb {R}^K$ is an input vector of word $w$ , $v^{^{\prime }}_w \in \mathbb {R}^K$ is an output vector of word $w$ , $K$ is the dimension of the vector, and $W$ is the set of all words. To optimize this objective function, hierarchical softmax or the negative sampling method BIBREF10 is used. We then obtain the word vectors and can compute analogies with them. For example, the analogy "King - Man + Woman = ?" yields "Queen" when using word2vec.
In this study, word2vec is applied to the data set of recipes. Word2vec can be applied by considering recipes as documents and ingredients as words. We do not include a window size parameter, since it encodes the ordering of words within a document, where ordering is relevant. In recipes, the listing of ingredients is unordered. The objective function is
$$\sum _{r \in R} \sum _{w_i \in r} \sum _{j \ne i} \log P(w_{j}|w_i) $$ (Eq. 12)
where $R$ is a set of recipes, $r$ is a recipe, and $w_i$ is the $i$th ingredient in recipe $r$ . The architecture is described in the middle of Figure 5 . The objective function is to maximize the likelihood of the prediction of the ingredient $w_j$ in the same recipe given the ingredient $w_i$ . The probability is defined below.
$$P(w_j|w_i) = \frac{\exp (v_{w_i}^Tv_{w_j}^{^{\prime }})}{\sum _{w \in W} \exp (v_{w_i}^Tv_w^{^{\prime }})}$$ (Eq. 13)
where $w$ is an ingredient, $v_w \in \mathbb {R}^K$ is an input vector of ingredient, $v^{^{\prime }}_w \in \mathbb {R}^K$ is an output vector of ingredient, $K$ is the dimension of the vector, and $W$ is the set of all ingredients.
Each ingredient is vectorized by word2vec, and the similarity of each ingredient is calculated using cosine similarity. Through vectorization in word2vec, those of the same genre are placed nearby. In other words, by using the word2vec vector, it is possible to select ingredients with similar genres.
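A practical sketch using gensim is shown below. gensim has no "no window" option, so a window wider than any recipe approximates predicting every other ingredient in the same recipe; the parameter names follow gensim 4.x, and the epoch and negative-sampling settings are assumptions rather than values from the paper:

```python
from gensim.models import Word2Vec

def train_ingredient2vec(recipes, dim=100):
    """Skip-gram over recipes treated as unordered ingredient lists."""
    max_len = max(len(r) for r in recipes)
    return Word2Vec(sentences=recipes, vector_size=dim, window=max_len,
                    sg=1, min_count=1, negative=10, epochs=30)

# similar ingredients of the same genre: model.wv.most_similar("shiitake", topn=5)
```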
Next, we extend word2vec to be able to incorporate information of the country. When we vectorize the countries, we can calculate the analogy between countries and ingredients. For example, this method can tell us what is the French ingredient that corresponds to Japanese soy sauce by calculating “Soy sauce - Japan + French = ?".
The details of our method are as follows. We maximize the objective function ( 14 ) below.
$$\sum _{r \in R} \sum _{w_i \in r} \left( \log P(w_{i}|c_r) + \log P(c_r|w_{i}) + \sum _{j \ne i} \log P(w_{j}|w_i)\right)$$ (Eq. 14)
where $R$ is a set of recipes, $r$ is a recipe, $w_i$ is the $i$th ingredient in recipe $r$ , and $c_r$ is the country recipe $r$ belongs to. The architecture is described on the right of Figure 5 . The objective function is to maximize the likelihood of the prediction of the ingredient $w_j$ in the same recipe given the ingredient $w_i$ , along with the prediction of the ingredient $w_i$ given the country $c_r$ and the prediction of the country $c_r$ given the ingredient $w_i$ . The probability is defined below.
$$P(b|a) = \frac{\exp (v_{a}^Tv_{b}^{^{\prime }})}{\sum _{c \in W} \exp (v_{a}^Tv_c^{^{\prime }})}$$ (Eq. 15)
where $a$ is an ingredient or country, $b$ and $c$ are as well, $v_a \in \mathbb {R}^K$ is an input vector of an ingredient or country, $v^{^{\prime }}_a \in \mathbb {R}^K$ is an output vector of an ingredient or country, $K$ is the dimension of the vectors, and $W$ is the set of all ingredients and all countries.
We can use hierarchical softmax or negative sampling BIBREF10 to maximize objective function ( 14 ) and find the vectors of ingredients and countries in the same vector space.
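A rough, practical stand-in for this extended objective is to append the country token to each recipe and train ordinary skip-gram over the result, so that the country and each ingredient predict one another in addition to ingredient-ingredient co-occurrence; the sketch below is this approximation, not the exact formulation above:

```python
from gensim.models import Word2Vec

def train_joint_vectors(recipes, countries, dim=100):
    """Embed ingredients and countries in the same space by treating the country
    label as an extra token of every recipe (an approximation of objective ( 14 ))."""
    docs = [list(ings) + [country] for ings, country in zip(recipes, countries)]
    window = max(len(d) for d in docs)
    return Word2Vec(sentences=docs, vector_size=dim, window=window,
                    sg=1, min_count=1, negative=10, epochs=30)
```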
Table 3 shows the ingredients nearest to each country in the vector space, which could be considered the most authentic for that regional cuisine BIBREF11. Figure 6 shows the ingredients and countries in a 2D map obtained with the t-SNE method BIBREF12.
Experiment
Our substitution strategy is as follows. First, we train the extended word2vec model on the Yummly dataset, so that all ingredients and countries are embedded in a 100-dimensional vector space. Second, we find substitutions by analogy calculation. For example, to find a French substitution for mirin, we calculate “Mirin - Japanese + French" in the vector space to obtain a result vector, and then find similar ingredients around that vector using cosine similarity.
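A minimal sketch of this analogy step, assuming `vecs` is a dictionary mapping every ingredient and country name to its 100-dimensional vector from the extended model; the function and variable names are illustrative, not taken from the authors' code.

```python
# Analogy-based substitution over the joint ingredient/country vector space.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def substitute(ingredient, src_country, dst_country, vecs, topn=5):
    """Ingredients closest to `ingredient - src_country + dst_country`."""
    target = vecs[ingredient] - vecs[src_country] + vecs[dst_country]
    ranked = sorted(
        ((name, cosine(target, v)) for name, v in vecs.items()
         if name not in (ingredient, src_country, dst_country)),
        key=lambda x: x[1], reverse=True)
    return ranked[:topn]

# e.g. a French-style substitution for mirin:
# substitute("mirin", "japanese", "french", vecs)
```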
As an example of our proposed system, we transformed a traditional Japanese “Sukiyaki" into French style. Table 4 shows the suggested replacement ingredients and the probabilities after replacement. “Sukiyaki" consists of soy sauce, beef sirloin, white sugar, green onions, mirin, shiitake, egg, vegetable oil, konnyaku, and Chinese cabbage. Figure 7 shows the French-style Sukiyaki cooked by professional chef KM, one of the authors of this paper. He assessed the new recipe as valid and novel to him as a French take on Sukiyaki. Our task here is generating a new dish, for which by definition there is no ground truth for comparison. Rating by experts is the standard approach for assessing novel generative artifacts, e.g. in studies of creativity BIBREF13, but going forward it is important to develop other approaches for assessment.
Discussion
With growing diversity in personal food preference and regional cuisine style, the development of data-driven systems which can transform recipes into any given regional cuisine style might be of value for food companies or professional chefs to create new recipes.
In this regard, this study adds two important contributions to the literature. First, this is, to the best of our knowledge, the first study to identify a recipe’s mixture of regional cuisine styles from a large number of recipes around the world. Previous studies have focused on assessing the degree of adherence to a single regional cuisine pattern. For example, the Mediterranean Diet Score is one of the most popular diet scores; it uses 11 main items (e.g., fruit, vegetable, olive oil, and wine) as criteria for assessing the degree of one’s Mediterranean style BIBREF14. However, it is becoming difficult to assign a recipe to a single specific country or regional style. For example, should Fish Provencal, whose name is suggestive of Southern France, be cast as French style? The answer is a mixture of different country styles: 32% French, 26% Italian, and 38% Spanish (see Figure 4).
Furthermore, our identification algorithm can be used to assess the degree of personal regional cuisine style mixture, using the user’s daily eating pattern as inputs. For example, when one enters the recipes that one has eaten in the past week into the algorithm, the probability values of each country would be returned, which shows the mixture of regional cuisine style of one’s daily eating pattern. As such, a future research direction would be developing algorithms that can transform personal regional cuisine patterns to a healthier style by providing a series of recipes that are in accordance with one’s unique food preferences.
Our transformation algorithm can be improved by adding multiple datasets from around the world. Needless to say, the lack of comprehensive data sets makes it difficult to develop algorithms for transforming regional cuisine style. For example, Yummly, one of the largest recipe sites in the world, is less likely to contain recipes from non-Western regions. Furthermore, data on traditional regional cuisine patterns are usually described in the native language. As such, developing a way to integrate multiple data sets in multiple languages is required for future research.
One method to address this issue might be as follows: 1) generate a vector representation for each ingredient using each data set independently; 2) translate only a small set of ingredients common to all data sets, such as potato, tomato, and onion; 3) using these common ingredients, map each vector representation into one common vector space, for example with canonical correlation analysis BIBREF15, as sketched below.
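A hedged sketch of step 3 using scikit-learn's CCA, assuming `vecs_a` and `vecs_b` are the two independently trained embedding dictionaries and `common_ingredients` is the small translated set; the number of components is an illustrative choice and must not exceed the number of common ingredients.

```python
# Map two independently trained ingredient spaces into one common space via CCA,
# fitting only on the translated common ingredients.
import numpy as np
from sklearn.cross_decomposition import CCA

X = np.stack([vecs_a[w] for w in common_ingredients])   # vectors from dataset A
Y = np.stack([vecs_b[w] for w in common_ingredients])   # vectors from dataset B

cca = CCA(n_components=10)   # must be <= number of common ingredients
cca.fit(X, Y)

# Any vector from either dataset can now be projected into the shared space:
shared_a = cca.transform(np.stack([vecs_a[w] for w in vecs_a]))
```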
Several fundamental limitations of the present study warrant mention. First of all, our identification and transformation algorithms depend on the quantity and quality of the recipes included in the data. As such, future research using our proposed system should employ large, high-quality recipe data. Second, the evolution of regional cuisines prevents us from developing a precise algorithm. For example, the definition of the Mediterranean regional cuisine pattern has been revised to adapt to current dietary patterns BIBREF16, BIBREF17. Therefore, future research should employ time-trend recipe data to distinctively specify a recipe’s mixture of regional cuisine styles and its date (cf. BIBREF18). Third, we did not consider the cooking method (e.g., baking, boiling, and deep frying) as a characteristic of country/regional style. Each country/region has different ways of cooking ingredients, and this is one of the important factors characterizing its food culture. Fourth, the combination of ingredients was not considered as a way to represent country/regional style. Previous studies have shown that Western recipes and East Asian recipes are opposite in the flavor compounds shared within ingredient pairs BIBREF19, BIBREF18, BIBREF20, BIBREF21, BIBREF11: Western cuisines tend to use ingredient pairs sharing many flavor compounds, while East Asian cuisines tend to avoid compound-sharing ingredients. This suggests that the combination of flavor compounds is also an elemental factor characterizing the food of each country/region. As such, if we analyzed the recipe data using flavor compounds, we might get different results.
In conclusion, we proposed a novel system which can transform a given recipe into any selected regional cuisine style. This system has two characteristics: 1) the system can identify a degree of regional cuisine style mixture of any selected recipe and visualize such regional cuisine style mixture using a barycentric Newton diagram; 2) the system can suggest ingredient substitution through extended word2vec model, such that a recipe becomes more authentic for any selected regional cuisine style. Future research directions were also discussed.
Conflict of Interest Statement
The authors declare that they have no conflict of interest.
Author Contributions
MK, LRV, and YI had the idea for the study and drafted the manuscript. MK performed the data collection and analysis. MS, CH, and KM participated in the interpretation of the results and discussions for manuscript writing and finalization. All authors read and approved the final manuscript.
Funding
Varshney's work was supported in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network.
Acknowledgments
This study used data from Yummly. We would like to express our deepest gratitude to everyone who participated in these services. We thank Kush Varshney for suggesting the spectral graph drawing approach to placing countries on the circle. | The basic idea of the visualization, drawing on Isaac Newton’s visualization of the color spectrum BIBREF8 , is to express a mixture in terms of its constituents as represented in barycentric coordinates. |
2cc42d14c8c927939a6b8d06f4fdee0913042416 | 2cc42d14c8c927939a6b8d06f4fdee0913042416_0 | Q: Do they propose any solution to debias the embeddings?
Text: Introduction
Recent work in the word embeddings literature has shown that embeddings encode gender and racial biases, BIBREF0, BIBREF1, BIBREF2. These biases can have harmful effects in downstream tasks including coreference resolution, BIBREF3 and machine translation, BIBREF4, leading to the development of a range of methods to try to mitigate such biases, BIBREF0, BIBREF5. In an adjacent literature, learning embeddings of knowledge graph (KG) entities and relations is becoming an increasingly common first step in utilizing KGs for a range of tasks, from missing link prediction, BIBREF6, BIBREF7, to more recent methods integrating learned embeddings into language models, BIBREF8, BIBREF9, BIBREF10.
A natural question to ask is “do graph embeddings encode social biases in similar fashion to word embeddings". We show that existing methods for identifying bias in word embeddings are not suitable for KG embeddings, and present an approach to overcome this using embedding finetuning. We demonstrate (perhaps unsurprisingly) that unequal distributions of people of different genders, ethnicities, religions and nationalities in Freebase and Wikidata result in biases related to professions being encoded in graph embeddings, such as that men are more likely to be bankers and women more likely to be homekeepers.
Such biases are potentially harmful when KG embeddings are used in applications. For example, if embeddings are used in a fact checking task, they would make it less likely that we accept facts that a female entity is a politician as opposed to a male entity. Alternatively, as KG embeddings get utilized as input to language models BIBREF8, BIBREF9, BIBREF10, such biases can affect all downstream NLP tasks.
Method ::: Graph Embeddings
Graph embeddings are a vector representation of dimension $d$ of all entities and relations in a KG. To learn these representations, we define a score function $g(.)$ which takes as input the embeddings of a fact in triple form and outputs a score, denoting how likely this triple is to be correct.
where $E_{1/2}$ are the dimension $d$ embeddings of entities 1/2, and $R_1$ is the dimension $d$ embedding of relation 1. The score function is composed of a transformation, which takes as input one entity embedding and the relation embedding and outputs a vector of the same dimension, and a similarity function, which calculates the similarity or distance between the output of the transformation function and the other entity embedding.
Many transformation functions have been proposed, including TransE BIBREF6, ComplEx BIBREF7 and RotatE BIBREF11. In this paper we use the TransE function and the dot product similarity metric, though emphasize that our approach is applicable to any score function:
We use embeddings of dimension 200, and sample 1000 negative triples per positive, by randomly permuting the lhs or rhs entity. We pass the 1000 negatives and single positive through a softmax function, and train using the cross entropy loss. All training is implemented using the PyTorch-BigGraph library BIBREF12.
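For concreteness, here is a minimal PyTorch sketch of this setup (TransE transformation with dot-product similarity, one positive scored against 1000 random negatives under a softmax cross-entropy loss); it simplifies the sampling and batching details of PyTorch-BigGraph, and the entity/relation counts are illustrative placeholders.

```python
# Minimal PyTorch sketch of the TransE + dot-product score and the
# 1-positive-vs-1000-negatives softmax cross-entropy loss described above.
import torch
import torch.nn.functional as F

NUM_ENTITIES, NUM_RELATIONS, DIM = 100_000, 500, 200   # illustrative sizes
ent = torch.nn.Embedding(NUM_ENTITIES, DIM)
rel = torch.nn.Embedding(NUM_RELATIONS, DIM)

def score(h_idx, r_idx, t_idx):
    """g(E1, R1, E2): TransE transformation (E1 + R1), then a dot product with E2."""
    return ((ent(h_idx) + rel(r_idx)) * ent(t_idx)).sum(-1)

def batch_loss(h, r, t, num_neg=1000):
    """Cross entropy over [positive, num_neg corrupted negatives] per triple."""
    neg_t = torch.randint(0, NUM_ENTITIES, (h.size(0), num_neg))   # corrupt rhs
    pos = score(h, r, t).unsqueeze(1)                              # [B, 1]
    neg = score(h.unsqueeze(1), r.unsqueeze(1), neg_t)             # [B, num_neg]
    logits = torch.cat([pos, neg], dim=1)                          # positive in column 0
    labels = torch.zeros(h.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```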
Method ::: Defining bias in embeddings
Bias can be thought of as “prejudice in favor or against a person, group, or thing that is considered to be unfair" BIBREF13. Because definitions of fairness have changed over time, algorithms which are trained on “real-world" data may pick up associations which existed historically (or still exist), but which are considered undesirable. In the word embedding literature, one common idea is to analyse relationships which embeddings encode between professions and gender, race, ethnicity or nationality. We follow this approach in this paper, though note that our method is equally applicable to measuring the encoded relationship between any set of entities in a KG.
Method ::: Measuring bias in word embeddings
The first common technique for exposing bias in word embeddings, the “Word Embedding Association Test" BIBREF1, measures the cosine distance between embeddings and the average embeddings of sets of attribute words (e.g. male vs. female). They give a range of examples of biases according to this metric, including that science related words are more associated with “male", and art related words with “female". In a similar vein, in BIBREF0, the authors use the direction between vectors to expose stereotypical analogies, claiming that the direction between man::doctor is analogous to that of woman::nurse. Despite BIBREF14 exposing some technical shortcomings in this approach, it remains the case that distance metrics appear to be appropriate in at least exposing bias in word embeddings, which has then been shown to clearly propagate to downstream tasks, BIBREF3, BIBREF4.
We suggest that distance-based metrics are not suitable for measuring bias in KG embeddings. Figure FIGREF7 provides a simple demonstration of this. Visualizing in a two dimensional space, the embedding of person1 is closer to nurse than to doctor. However, graph embedding models do not use distance between two entity embeddings when making predictions, but rather the distance between some transformation of one entity embedding with the relation embedding.
In the simplest case of TransE BIBREF6 this transformation is a summation, which could result in a vector positioned at the yellow dot in Figure FIGREF7, when making a prediction of the profession of person1. As the transformation function becomes more complicated, BIBREF7, BIBREF11 etc., the distance metric becomes increasingly less applicable, as associations in the distance space become less and less correlated with associations in the score function space.
Method ::: Score based metric
In light of this, we present an alternative metric based on the score function. We define the sensitive attribute we are interested in, denoted $S$, and two alternative values of this attribute, denoted $A$ and $B$. For the purposes of this example we use gender as the sensitive attribute $S$, and male and female as the alternative values $A$ and $B$. We take a trained embedding of a human entity, $j$, denoted $E_j$ and calculate an update to this embedding which increases the score that they have attribute $A$ (male), and decreases the score that they have attribute $B$ (female). In other words, we finetune the embedding to make the person “more male" according to the model's encoding of masculinity. This is visualized in Figure FIGREF9, where we shift person1's embedding so that the transformation between person1 and the relation has_gender moves closer to male and away from female.
Mathematically, we define the function $M$ as the difference between the score that person $j$ has sensitive attribute $A$ (male) and the score that they have sensitive attribute $B$ (female). We then differentiate $M$ with respect to the embedding of person $j$, $E_{j}$, and update the embedding to increase this score function.
where $E_{j}^{\prime }$ denotes the new embedding for person $j$, $R_{S}$ the embedding of the sensitive relation $i$ (gender), and $E_{A}$ and $E_B$ the embeddings of attributes $A$ and $B$ (male and female). This is equivalent to providing the model with a batch of two triples, $(E_{j}, R_{S}, E_{A})$ and $(E_{j}, R_{S}, E_{B})$, and taking a step with the basic gradient descent algorithm with learning rate $\alpha $.
We then analyse the change in the scores for all professions. That is, we calculate whether, according to the model's score function, making an entity more male increases or decreases the likelihood that they have a particular profession, $p$:
where $E_{p}$ denotes the entity embedding of the profession, $p$.
Figure FIGREF10 illustrates this. The adjustment to person1's embedding defined in Figure FIGREF9 results in the transformation of person1 and the relation has_profession moving closer to doctor and further away from nurse. That is, the score g(person1, has_profession, doctor) has increased, and the score g(person1, has_profession, nurse) has decreased. In other words, the embeddings in this case encode the bias that doctor is a profession associated with male rather than female entities.
We can then repeat the process for all humans in the KG and calculate the average changes, giving a bias score $B_p$ for profession $p$:
where J is the number of human entities in the KG. We calculate this score for each profession, $p = 1,...,P$ and rank the results.
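Putting the pieces together, the following sketch (continuing the TransE/dot-product notation of the earlier snippet) finetunes each human embedding towards attribute $A$ and away from $B$, then averages the resulting change in profession scores to obtain $B_p$; variable names and the step size are illustrative.

```python
# Sketch of the bias measure B_p: for every human j, take one gradient step that
# raises g(j, has_gender, A) and lowers g(j, has_gender, B), then average the
# resulting change in profession scores. Autograd keeps this generic for any
# score function; for TransE the gradient is simply (e_A - e_B).
import torch

def bias_scores(E, r_gender, e_A, e_B, r_prof, humans, professions, alpha=0.1):
    """E: entity embedding matrix [N, d]; humans/professions: lists of entity ids."""
    P = E[professions].detach()                          # [num_professions, d]
    totals = torch.zeros(len(professions))
    for j in humans:
        e_j = E[j].detach().clone().requires_grad_(True)
        # M = g(j, S, A) - g(j, S, B)
        M = ((e_j + r_gender) * e_A).sum() - ((e_j + r_gender) * e_B).sum()
        (grad,) = torch.autograd.grad(M, e_j)
        e_j_new = (e_j + alpha * grad).detach()          # "more A, less B"
        before = ((e_j + r_prof) * P).sum(-1)            # score of every profession
        after = ((e_j_new + r_prof) * P).sum(-1)
        totals += (after - before).detach()
    return totals / len(humans)                          # B_p, one entry per profession

# Ranking professions by B_p yields tables analogous to TABREF11/TABREF12.
```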
Results
We provide results in the main paper for Wikidata using TransE BIBREF6 embeddings, showing only professions which have at least 20 observations in the KG.
Table TABREF11 presents the results for gender, with attribute $A$ being male and $B$ female. Alongside the score we present the counts of humans in the KG which have this profession, split by attributes $A$ and $B$. For example, the top rows of column $C_A$ and $C_B$ in Table TABREF11 shows that there are 44 male entities in Wikidata with the profession baritone, and 0 female entities with this profession.
Whilst the discrepancies in counts are of interest in themselves BIBREF15, our main aim in this paper is to show that these differences propagate to the learned embeddings. Table TABREF11 confirms this; although it includes a number of professions which are male by definition, such as “baritone", there are also many which we may wish to be neutral, such as “banker" and “engineer". Whilst there is a strong correlation between the counts and $B_p$, it is not perfect. For example, there are more male and fewer female priests than there are bankers, but we get a higher score according to the model for banker than we do for priest. The interconnected nature of graphs makes diagnosing the reason for this difficult, but there is clearly a difference in the representation of the male entities in the graph who are bankers relative to priests, which plays out along gender lines.
Table TABREF12 presents the most female professions relative to male for Wikidata (i.e. we reverse $A$ and $B$ from Table TABREF11). As with the most male case, there are a mixture of professions which are female by definition, such as “nun", and those which we may wish to be neutral, such as “nurse" and “homekeeper". This story is supported by Tables TABREF22 and TABREF23 in the Appendix, which give the same results but for the FB3M dataset.
We can also calculate biases for other sensitive relations such as ethnicity, religion and nationality. For each of these relations, we choose two attributes to compare. In Table TABREF13, we show the professions most associated with the ethnicity “Jewish" relative to “African American". As previously, the results include potentially harmful stereotypes, such as the “economist" and “entrepreneur" cases. It is interesting that these stereotypes play out in our measure, despite the more balanced nature of the counts. We provide sample results for religion and nationality in Appendix SECREF15, alongside results for Freebase. To verify that our approach is equally applicable to any transformation function, we also include results in Appendix SECREF30 for ComplEx embeddings.
Summary
We have presented the first study on social bias in KG embeddings, and proposed a new metric for measuring such bias. We demonstrated that differences in the distributions of entities in real-world knowledge graphs (there are many more male bankers in Wikidata than female) translate into harmful biases related to professions being encoded in embeddings. Given that KGs are formed of real-world entities, we cannot simply equalize the counts; it is not possible to correct history by creating female US Presidents, etc. In light of this, we suggest that care is needed when applying graph embeddings in NLP pipelines, and work needed to develop robust methods to debias such embeddings.
Appendices ::: Wikidata additional results
We provide a sample of additional results for Wikidata, across ethnicity, religion and nationality. For each case we choose a pair of values (e.g. Catholic and Islam for religion) to compare.
The picture presented is similar to that in the main paper; the bias measure is highly correlated with the raw counts, with some associations being non-controversial, and others demonstrating potentially harmful stereotypes. Table TABREF19 is interesting, as the larger number of US entities in Wikidata (390k) relative to UK entities (131k) means the counts are more balanced, and the correlation between counts and bias measure less strong.
Appendices ::: FB3M results
For comparison, we train TransE embeddings on FB3M of the same dimension, and present the corresponding results tables for gender, religion, ethnicity and nationality. The distribution of entities in FB3M is significantly different to that in Wikidata, resulting in a variety of different professions entering the top twenty counts. However, the broad conclusion is similar; the embeddings encode common and potentially harmful stereotypes related to professions.
Appendices ::: Complex embeddings
Our method is equally applicable to any transformation function. To demonstrate this, we trained embeddings of the same dimension using the ComplEx transformation BIBREF7, and provide the results for gender in Tables TABREF31 and TABREF32 below. It would be interesting to carry out a comparison of the differences in how bias is encoded for different transformation functions, which we leave to future work. | No |
b546f14feaa639e43aa64c799dc61b8ef480fb3d | b546f14feaa639e43aa64c799dc61b8ef480fb3d_0 | Q: How are these biases found?
Text: Introduction
Recent work in the word embeddings literature has shown that embeddings encode gender and racial biases, BIBREF0, BIBREF1, BIBREF2. These biases can have harmful effects in downstream tasks including coreference resolution, BIBREF3 and machine translation, BIBREF4, leading to the development of a range of methods to try to mitigate such biases, BIBREF0, BIBREF5. In an adjacent literature, learning embeddings of knowledge graph (KG) entities and relations is becoming an increasingly common first step in utilizing KGs for a range of tasks, from missing link prediction, BIBREF6, BIBREF7, to more recent methods integrating learned embeddings into language models, BIBREF8, BIBREF9, BIBREF10.
A natural question to ask is “do graph embeddings encode social biases in similar fashion to word embeddings". We show that existing methods for identifying bias in word embeddings are not suitable for KG embeddings, and present an approach to overcome this using embedding finetuning. We demonstrate (perhaps unsurprisingly) that unequal distributions of people of different genders, ethnicities, religions and nationalities in Freebase and Wikidata result in biases related to professions being encoded in graph embeddings, such as that men are more likely to be bankers and women more likely to be homekeepers.
Such biases are potentially harmful when KG embeddings are used in applications. For example, if embeddings are used in a fact checking task, they would make it less likely that we accept facts that a female entity is a politician as opposed to a male entity. Alternatively, as KG embeddings get utilized as input to language models BIBREF8, BIBREF9, BIBREF10, such biases can affect all downstream NLP tasks.
Method ::: Graph Embeddings
Graph embeddings are a vector representation of dimension $d$ of all entities and relations in a KG. To learn these representations, we define a score function $g(.)$ which takes as input the embeddings of a fact in triple form and outputs a score, denoting how likely this triple is to be correct.
where $E_{1/2}$ are the dimension $d$ embeddings of entities 1/2, and $R_1$ is the dimension $d$ embedding of relation 1. The score function is composed of a transformation, which takes as input one entity embedding and the relation embedding and outputs a vector of the same dimension, and a similarity function, which calculates the similarity or distance between the output of the transformation function and the other entity embedding.
Many transformation functions have been proposed, including TransE BIBREF6, ComplEx BIBREF7 and RotatE BIBREF11. In this paper we use the TransE function and the dot product similarity metric, though emphasize that our approach is applicable to any score function:
We use embeddings of dimension 200, and sample 1000 negative triples per positive, by randomly permuting the lhs or rhs entity. We pass the 1000 negatives and single positive through a softmax function, and train using the cross entropy loss. All training is implemented using the PyTorch-BigGraph library BIBREF12.
Method ::: Defining bias in embeddings
Bias can be thought of as “prejudice in favor or against a person, group, or thing that is considered to be unfair" BIBREF13. Because definitions of fairness have changed over time, algorithms which are trained on “real-world" data may pick up associations which existed historically (or still exist), but which are considered undesirable. In the word embedding literature, one common idea is to analyse relationships which embeddings encode between professions and gender, race, ethnicity or nationality. We follow this approach in this paper, though note that our method is equally applicable to measuring the encoded relationship between any set of entities in a KG.
Method ::: Measuring bias in word embeddings
The first common technique for exposing bias in word embeddings, the “Word Embedding Association Test" BIBREF1, measures the cosine distance between embeddings and the average embeddings of sets of attribute words (e.g. male vs. female). They give a range of examples of biases according to this metric, including that science related words are more associated with “male", and art related words with “female". In a similar vein, in BIBREF0, the authors use the direction between vectors to expose stereotypical analogies, claiming that the direction between man::doctor is analogous to that of woman::nurse. Despite BIBREF14 exposing some technical shortcomings in this approach, it remains the case that distance metrics appear to be appropriate in at least exposing bias in word embeddings, which has then been shown to clearly propagate to downstream tasks, BIBREF3, BIBREF4.
We suggest that distance-based metrics are not suitable for measuring bias in KG embeddings. Figure FIGREF7 provides a simple demonstration of this. Visualizing in a two dimensional space, the embedding of person1 is closer to nurse than to doctor. However, graph embedding models do not use distance between two entity embeddings when making predictions, but rather the distance between some transformation of one entity embedding with the relation embedding.
In the simplest case of TransE BIBREF6 this transformation is a summation, which could result in a vector positioned at the yellow dot in Figure FIGREF7, when making a prediction of the profession of person1. As the transformation function becomes more complicated, BIBREF7, BIBREF11 etc., the distance metric becomes increasingly less applicable, as associations in the distance space become less and less correlated with associations in the score function space.
Method ::: Score based metric
In light of this, we present an alternative metric based on the score function. We define the sensitive attribute we are interested in, denoted $S$, and two alternative values of this attribute, denoted $A$ and $B$. For the purposes of this example we use gender as the sensitive attribute $S$, and male and female as the alternative values $A$ and $B$. We take a trained embedding of a human entity, $j$, denoted $E_j$ and calculate an update to this embedding which increases the score that they have attribute $A$ (male), and decreases the score that they have attribute $B$ (female). In other words, we finetune the embedding to make the person “more male" according to the model's encoding of masculinity. This is visualized in Figure FIGREF9, where we shift person1's embedding so that the transformation between person1 and the relation has_gender moves closer to male and away from female.
Mathematically, we define the function $M$ as the difference between the score that person $j$ has sensitive attribute $A$ (male) and the score that they have sensitive attribute $B$ (female). We then differentiate $M$ with respect to the embedding of person $j$, $E_{j}$, and update the embedding to increase this score function.
where $E_{j}^{\prime }$ denotes the new embedding for person $j$, $R_{S}$ the embedding of the sensitive relation $i$ (gender), and $E_{A}$ and $E_B$ the embeddings of attributes $A$ and $B$ (male and female). This is equivalent to providing the model with a batch of two triples, $(E_{j}, R_{S}, E_{A})$ and $(E_{j}, R_{S}, E_{B})$, and taking a step with the basic gradient descent algorithm with learning rate $\alpha $.
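One way to write this two-triple update as a single SGD step in PyTorch, with only person $j$'s embedding trainable, is sketched below; this is an illustrative reformulation under the TransE/dot-product score, not the authors' code.

```python
# The update above, written as one SGD step over the two triples (j, S, A) and
# (j, S, B); the learning rate alpha is illustrative.
import torch

def finetune_person(e_j, r_S, e_A, e_B, alpha=0.1):
    e_j = e_j.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([e_j], lr=alpha)
    # maximize M = g(j, S, A) - g(j, S, B)  <=>  minimize -M
    loss = -(((e_j + r_S) * e_A).sum() - ((e_j + r_S) * e_B).sum())
    loss.backward()
    opt.step()
    return e_j.detach()
```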
We then analyse the change in the scores for all professions. That is, we calculate whether, according to the model's score function, making an entity more male increases or decreases the likelihood that they have a particular profession, $p$:
where $E_{p}$ denotes the entity embedding of the profession, $p$.
Figure FIGREF10 illustrates this. The adjustment to person1's embedding defined in Figure FIGREF9 results in the transformation of person1 and the relation has_profession moving closer to doctor and further away from nurse. That is, the score g(person1, has_profession, doctor) has increased, and the score g(person1, has_profession, nurse) has decreased. In other words, the embeddings in this case encode the bias that doctor is a profession associated with male rather than female entities.
We can then repeat the process for all humans in the KG and calculate the average changes, giving a bias score $B_p$ for profession $p$:
where J is the number of human entities in the KG. We calculate this score for each profession, $p = 1,...,P$ and rank the results.
Results
We provide results in the main paper for Wikidata using TransE BIBREF6 embeddings, showing only professions which have at least 20 observations in the KG.
Table TABREF11 presents the results for gender, with attribute $A$ being male and $B$ female. Alongside the score we present the counts of humans in the KG which have this profession, split by attributes $A$ and $B$. For example, the top rows of column $C_A$ and $C_B$ in Table TABREF11 shows that there are 44 male entities in Wikidata with the profession baritone, and 0 female entities with this profession.
Whilst the discrepancies in counts are of interest in themselves BIBREF15, our main aim in this paper is to show that these differences propagate to the learned embeddings. Table TABREF11 confirms this; although it includes a number of professions which are male by definition, such as “baritone", there are also many which we may wish to be neutral, such as “banker" and “engineer". Whilst there is a strong correlation between the counts and $B_p$, it is not perfect. For example, there are more male and fewer female priests than there are bankers, but we get a higher score according to the model for banker than we do for priest. The interconnected nature of graphs makes diagnosing the reason for this difficult, but there is clearly a difference in the representation of the male entities in the graph who are bankers relative to priests, which plays out along gender lines.
Table TABREF12 presents the most female professions relative to male for Wikidata (i.e. we reverse $A$ and $B$ from Table TABREF11). As with the most male case, there are a mixture of professions which are female by definition, such as “nun", and those which we may wish to be neutral, such as “nurse" and “homekeeper". This story is supported by Tables TABREF22 and TABREF23 in the Appendix, which give the same results but for the FB3M dataset.
We can also calculate biases for other sensitive relations such as ethnicity, religion and nationality. For each of these relations, we choose two attributes to compare. In Table TABREF13, we show the professions most associated with the ethnicity “Jewish" relative to “African American". As previously, the results include potentially harmful stereotypes, such as the “economist" and “entrepreneur" cases. It is interesting that these stereotypes play out in our measure, despite the more balanced nature of the counts. We provide sample results for religion and nationality in Appendix SECREF15, alongside results for Freebase. To verify that our approach is equally applicable to any transformation function, we also include results in Appendix SECREF30 for ComplEx embeddings.
Summary
We have presented the first study on social bias in KG embeddings, and proposed a new metric for measuring such bias. We demonstrated that differences in the distributions of entities in real-world knowledge graphs (there are many more male bankers in Wikidata than female) translate into harmful biases related to professions being encoded in embeddings. Given that KGs are formed of real-world entities, we cannot simply equalize the counts; it is not possible to correct history by creating female US Presidents, etc. In light of this, we suggest that care is needed when applying graph embeddings in NLP pipelines, and work needed to develop robust methods to debias such embeddings.
Appendices ::: Wikidata additional results
We provide a sample of additional results for Wikidata, across ethnicity, religion and nationality. For each case we choose a pair of values (e.g. Catholic and Islam for religion) to compare.
The picture presented is similar to that in the main paper; the bias measure is highly correlated with the raw counts, with some associations being non-controversial, and others demonstrating potentially harmful stereotypes. Table TABREF19 is interesting, as the larger number of US entities in Wikidata (390k) relative to UK entities (131k) means the counts are more balanced, and the correlation between counts and bias measure less strong.
Appendices ::: FB3M results
For comparison, we train TransE embeddings on FB3M of the same dimension, and present the corresponding results tables for gender, religion, ethnicity and nationality. The distribution of entities in FB3M is significantly different to that in Wikidata, resulting in a variety of different professions entering the top twenty counts. However, the broad conclusion is similar; the embeddings encode common and potentially harmful stereotypes related to professions.
Appendices ::: Complex embeddings
Our method is equally applicable to any transformation function. To demonstrate this, we trained embeddings of the same dimension using the ComplEx transformation BIBREF7, and provide the results for gender in Tables TABREF31 and TABREF32 below. It would be interesting to carry out a comparison of the differences in how bias is encoded for different transformation functions, which we leave to future work. | Unanswerable |
8568c82078495ab421ecbae38ddd692c867eac09 | 8568c82078495ab421ecbae38ddd692c867eac09_0 | Q: How many layers of self-attention does the model have?
Text: Introduction
Task-oriented chatbots are a type of dialogue generation system that tries to help users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. The development of such systems falls into the Conversational AI domain, the science of developing agents which are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by large companies to engage with their customers.
There are mainly two ways to create a task-oriented chatbot: either using a set of hand-crafted and carefully designed rules, or using a corpus-based method in which the chatbot is trained on a relatively large corpus of conversational data. Given the abundance of dialogue data, the latter seems to be the better and more general approach for developing task-oriented chatbots. The corpus-based method in turn falls into two main chatbot design architectures, namely pipelined and end-to-end architectures BIBREF1. End-to-end chatbots are usually neural network based BIBREF2, BIBREF3, BIBREF4, BIBREF5 and thus can be adapted to new domains by training on relevant dialogue datasets for that specific domain. Furthermore, any sequence modelling method can be used to train end-to-end task-oriented chatbots. A sequence modelling method receives a sequence as input and predicts another sequence as output. For example, in the case of machine translation the input could be a sequence of words in a given language and the output would be a sentence in a second language. In a dialogue system, an utterance is the input and the predicted sequence of words is the corresponding response.
Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in that their architecture is based only on the attention mechanism. The Transformer BIBREF6 and Universal Transformer BIBREF7 models are the first models that entirely rely on the self-attention mechanism for both encoder and decoder, which is why they are also referred to as self-attentional models. The Transformer model has produced state-of-the-art results in the task of neural machine translation BIBREF6, and this encouraged us to further investigate this model for the task of training task-oriented chatbots. While the Transformer model has no recurrence, it turns out that the recurrence used in RNN models is essential for some NLP tasks, including language understanding tasks, and thus the Transformer fails to generalize in those tasks BIBREF7. We also investigate the usage of the Universal Transformer for this task to see how it compares to the Transformer model.
We focus on self-attentional sequence modelling for this study and intend to provide an answer for one specific question which is:
How effective are self-attentional models for training end-to-end task-oriented chatbots?
Our contribution in this study is as follows:
We train end-to-end task-oriented chatbots using both self-attentional models and common recurrence-based models used in sequence modelling tasks and compare and analyze the results using different evaluation metrics on three different datasets.
We provide insight into how effective are self-attentional models for this task and benchmark the time performance of these models against the recurrence-based sequence modelling methods.
We try to quantify the effectiveness of self-attention mechanism in self-attentional models and compare its effect to recurrence-based models for the task of training end-to-end task-oriented chatbots.
Related Work ::: Task-Oriented Chatbots Architectures
End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning method in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9.
Another popular method for training chatbots is based on memory networks. Memory networks augment neural networks with task-specific memories which the model can learn to read and write. Memory networks have been used in BIBREF8 for training task-oriented agents, storing the dialogue context in the memory module, which the model then uses to select a system response (also stored in the memory module) from a set of candidates. A variation of key-value memory networks BIBREF10 has been used in BIBREF11 for training task-oriented chatbots, which stores the knowledge base in the form of triplets (i.e., (subject, relation, object), such as (yoga, time, 3pm)) in the key-value memory network; the model then tries to select the most relevant entity from the memory and create a relevant response. This approach makes the interaction with the knowledge base smoother compared to other models.
Another approach for training end-to-end task-oriented dialogue systems models task-oriented dialogue generation as a reinforcement learning problem in which the current state of the conversation is passed to a sequence learning network, and this network decides the action the chatbot should take. The end-to-end LSTM-based model BIBREF12 and the Hybrid Code Networks BIBREF13 can use both supervised and reinforcement learning approaches for training task-oriented chatbots.
Related Work ::: Sequence Modelling Methods
Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attentional-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15, and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models used in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17 which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words context size, and thus can model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling.
Models
We compare the most commonly used recurrence-based models for sequence modelling and contrast them with Transformer and Universal Transformer models. The models that we train are:
Models ::: LSTM and Bi-Directional LSTM
Long Short-term Memory (LSTM) networks are a special kind of RNN networks which can learn long-term dependencies BIBREF15. RNN models suffer from the vanishing gradient problem BIBREF20 which makes it hard for RNN models to learn long-term dependencies. The LSTM model tackles this problem by defining a gating mechanism which introduces input, output and forget gates, and the model has the ability to decide how much of the previous information it needs to keep and how much of the new information it needs to integrate and thus this mechanism helps the model keep track of long-term dependencies.
Bi-directional LSTMs BIBREF21 are a variation of LSTMs which have proved to give better results for some NLP tasks BIBREF22. The idea behind a Bi-directional LSTM is to give the network (while training) the ability to look not only at past tokens, as an LSTM does, but also at future tokens, so the model has access to information from both the past and the future. In task-oriented dialogue generation systems, the information the model needs to learn the dependencies between tokens sometimes comes from tokens ahead of the current index, and if the model is able to take future tokens into account it can learn more efficiently.
Models ::: Transformer
As discussed before, Transformer is the first model that entirely relies on the self-attention mechanism for both the encoder and the decoder. The Transformer uses the self-attention mechanism to learn a representation of a sentence by relating different positions of that sentence. Like many of the sequence modelling methods, Transformer follows the encoder-decoder architecture in which the input is given to the encoder and the results of the encoder is passed to the decoder to create the output sequence. The difference between Transformer (which is a self-attentional model) and other sequence models (such as recurrence-based and convolution-based) is that the encoder and decoder architecture is only based on the self-attention mechanism. The Transformer also uses multi-head attention which intends to give the model the ability to look at different representations of the different positions of both the input (encoder self-attention), output (decoder self-attention) and also between input and output (encoder-decoder attention) BIBREF6. It has been used in a variety of NLP tasks such as mathematical language understanding [110], language modeling BIBREF23, machine translation BIBREF6, question answering BIBREF24, and text summarization BIBREF25.
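As a reminder of the core operation, the following is a compact single-head scaled dot-product attention sketch in PyTorch; multi-head attention runs several such heads in parallel on linearly projected inputs and concatenates their outputs.

```python
# Single-head scaled dot-product attention, the building block of the
# multi-head (self-)attention described above.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5        # [..., len_q, len_k]
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)                  # attention distribution
    return weights @ V                                   # [..., len_q, d_v]
```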
Models ::: Universal Transformer
The Universal Transformer model is an encoder-decoder-based sequence-to-sequence model which applies recurrence to the representation of each of the positions of the input and output sequences. The main difference between the RNN recurrence and the Universal Transformer recurrence is that the recurrence used in the Universal Transformer is applied on consecutive representation vectors of each token in the sequence (i.e., over depth) whereas in the RNN models this recurrence is applied on positions of the tokens in the sequence. A variation of the Universal Transformer, called Adaptive Universal Transformer, applies the Adaptive Computation Time (ACT) BIBREF26 technique on the Universal Transformer model which makes the model train faster since it saves computation time and also in some cases can increase the model accuracy. The ACT allows the Universal Transformer model to use different recurrence time steps for different tokens.
We know, based on reported evidence, that transformers are potent in NLP tasks like translation and question answering. Our aim is to assess the applicability and effectiveness of Transformers and Universal Transformers in the domain of task-oriented conversational agents. In the next section, we report on experiments that compare the performance of self-attentional models against the aforementioned recurrence-based models for the task of training end-to-end task-oriented chatbots.
Experiments
We ran our experiments on a Tesla 960M Graphics Processing Unit (GPU). We evaluated the models using the metrics described below and also applied early stopping (with delta set to 0.1 over 600 training steps).
Experiments ::: Datasets
We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27 which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28 which are M2M-sim-M (dataset in movie domain) and M2M-sim-R (dataset in restaurant domain). M2M stands for Machines Talking to Machines which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained on our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets which we will use to train and evaluate the models.
The M2M dataset has more diversity in both language and dialogue flow compared to the commonly used DSTC2 dataset, which makes it appealing for the task of creating task-oriented chatbots. This is also the reason we decided to use the M2M dataset in our experiments, to see how well the models can handle a more diverse dataset.
Experiments ::: Datasets ::: Dataset Preparation
We followed the data preparation process used for feeding the conversation history into the encoder-decoder as in BIBREF5. Consider a sample dialogue $D$ in the corpus which consists of a number of turns exchanged between the user and the system. $D$ can be represented as ${(u_1, s_1),(u_2, s_2), ...,(u_k, s_k)}$ where $k$ is the number of turns in this dialogue. At each time step in the conversation, we encode the conversation turns up to that time step, which is the context of the dialogue so far, and the system response after that time step will be used as the target. For example, given we are processing the conversation at time step $i$, the context of the conversation so far would be ${(u_1, s_1, u_2, s_2, ..., u_i)}$ and the model has to learn to output ${(s_i)}$ as the target.
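A minimal sketch of this preparation step, assuming each dialogue is given as a list of (user utterance, system response) pairs; the plain-space turn separator is an illustrative choice.

```python
# Build (context, target) pairs from a dialogue D = [(u_1, s_1), ..., (u_k, s_k)]
# as described above.
def make_examples(dialogue):
    examples, history = [], []
    for u, s in dialogue:
        context = " ".join(history + [u])   # (u_1, s_1, ..., u_i)
        examples.append((context, s))       # the target is s_i
        history.extend([u, s])
    return examples
```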
Experiments ::: Training
We used the tensor2tensor library BIBREF29 in our experiments for training and evaluating the sequence modelling methods. We used the Adam optimizer BIBREF30 for training the models, with $\beta _1=0.9$, $\beta _2=0.997$, and $\epsilon =1e-9$, and started with a learning rate of 0.2 with the noam learning rate decay schema BIBREF6. In order to avoid overfitting, we used dropout BIBREF31 with the dropout rate chosen from the [0.7-0.9] range, and we also applied early stopping BIBREF14 as a regularization method. We set the batch size to 4096, the hidden size to 128, and the embedding size to 128 for all the models, and used grid search for hyperparameter tuning of all trained models. Details of our training and hyperparameter tuning and the code for reproducing the results can be found in the chatbot-exp github repository.
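For reference, here is a sketch of the noam decay schedule from BIBREF6 with the base rate (0.2) and hidden size (128) used above; the warmup step count is an assumed placeholder, and the exact scaling constants inside tensor2tensor may differ slightly from this plain formulation.

```python
# Noam learning-rate schedule: warm up linearly, then decay with the inverse
# square root of the step count, scaled by hidden_size ** -0.5.
def noam_lr(step, base_lr=0.2, hidden_size=128, warmup_steps=4000):
    step = max(step, 1)
    return base_lr * hidden_size ** -0.5 * min(step ** -0.5,
                                               step * warmup_steps ** -1.5)
```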
Experiments ::: Inference
At inference time, there are mainly two decoding methods: greedy search and beam search BIBREF32. Beam search has proved to be an essential part of generative NLP tasks such as neural machine translation BIBREF33. In the case of dialogue generation systems, beam search can help alleviate the problem of having many possible valid outputs which do not match the target but are still sensible responses. Consider the case in which a task-oriented chatbot, trained for a restaurant reservation task, in response to the user utterance “Persian food”, generates the response “what time and day would you like the reservation for?” while the target defined for the system is “would you like a fancy restaurant?”. The response generated by the chatbot is a valid response which asks the user about other possible entities but does not match the defined target.
We try to alleviate this problem at inference time by applying the beam search technique with different beam sizes $\alpha \in \lbrace 1, 2, 4\rbrace $ and picking the best result based on the BLEU score. Note that when $\alpha = 1$, we are using the original greedy search method for the generation task.
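A sketch of this selection procedure; `decode` and `corpus_bleu` are assumed helpers (any decoder supporting a beam-size argument and any standard BLEU implementation would do), not part of the authors' released code.

```python
# Pick the beam size with the highest corpus BLEU on a held-out set.
def pick_beam_size(model, contexts, references, beam_sizes=(1, 2, 4)):
    best_bs, best_bleu = None, float("-inf")
    for bs in beam_sizes:
        hypotheses = [decode(model, c, beam_size=bs) for c in contexts]
        bleu = corpus_bleu(hypotheses, references)
        if bleu > best_bleu:
            best_bs, best_bleu = bs, bleu
    return best_bs, best_bleu
```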
Experiments ::: Evaluation Measures
BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.
Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.
Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.
F1-Entity Score: Datasets used in task-oriented dialogue tasks have a set of entities which represent user preferences. For example, in restaurant-domain chatbots, common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there can be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we can use named-entity recognition evaluation metrics BIBREF36. The F1 score is the most commonly used metric for the evaluation of named-entity recognition models and is the harmonic mean of the precision and recall of the model. We calculate this metric by micro-averaging over all the system-generated responses.
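A sketch of how these three metrics can be computed; `extract_entities` is an assumed helper returning the set of entity tokens in a response string, and exact token matching follows the strict definitions above.

```python
# Per-turn accuracy, per-dialogue accuracy, and micro-averaged entity F1.
def per_turn_accuracy(preds, targets):
    return sum(p == t for p, t in zip(preds, targets)) / len(targets)

def per_dialogue_accuracy(dialogues):
    """dialogues: list of (predicted_turns, target_turns) pairs."""
    exact = sum(all(p == t for p, t in zip(ps, ts)) for ps, ts in dialogues)
    return exact / len(dialogues)

def entity_f1(preds, targets):
    tp = fp = fn = 0
    for p, t in zip(preds, targets):
        pred_ents, gold_ents = extract_entities(p), extract_entities(t)
        tp += len(pred_ents & gold_ents)
        fp += len(pred_ents - gold_ents)
        fn += len(gold_ents - pred_ents)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```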
Results and Discussion ::: Comparison of Models
The results of running the experiments for the aforementioned models are shown in Table TABREF14 for the DSTC2 dataset and in Table TABREF18 for the M2M datasets. The bold numbers show the best performing model for each of the evaluation metrics. As discussed before, for each model we use different beam sizes (bs) at inference time and report the best one. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. The evaluation numbers drop for the M2M datasets; in our investigation of the trained models, we found that this considerable reduction is due to the fact that the M2M datasets are considerably more diverse than the DSTC2 dataset while their training corpora are smaller.
Results and Discussion ::: Time Performance Comparison
Table TABREF22 shows the time performance of the models trained on DSTC2 dataset. Note that in order to get a fair time performance comparison, we trained the models with the same batch size (4096) and on the same GPU. These numbers are for the best performing model (in terms of evaluation loss and selected using the early stopping method) for each of the sequence modelling methods. Time to Convergence (T2C) shows the approximate time that the model was trained to converge. We also show the loss in the development set for that specific checkpoint.
Results and Discussion ::: Effect of (Self-)Attention Mechanism
As discussed before in Section SECREF8, self-attentional models rely on the self-attention mechanism for sequence modelling. Recurrence-based models such as LSTM and Bi-LSTM can also be augmented in order to increase their performance, as is evident in Table TABREF14, which shows the increase in the performance of both LSTM and Bi-LSTM when augmented with an attention mechanism. This leads to the question of whether we can increase the performance of recurrence-based models by adding multiple attention heads, similar to the multi-head self-attention mechanism used in self-attentional models, and thereby outperform the self-attentional models.
To investigate this question, we ran a number of experiments in which we added multiple attention heads on top of Bi-LSTM model and also tried a different number of self-attention heads in self-attentional models in order to compare their performance for this specific task. Table TABREF25 shows the results of these experiments. Note that the models in Table TABREF25 are actually the best models that we found in our experiments on DSTC2 dataset and we only changed one parameter for each of them, i.e. the number of attention heads in the recurrence-based models and the number of self-attention heads in the self-attentional models, keeping all other parameters unchanged. We also report the results of models with beam size of 2 in inference time. We increased the number of attention heads in the Bi-LSTM model up to 64 heads to see its performance change. Note that increasing the number of attention heads makes the training time intractable and time consuming while the model size would increase significantly as shown in Table TABREF24. Furthermore, by observing the results of the Bi-LSTM+Att model in Table TABREF25 (both test and development set) we can see that Bi-LSTM performance decreases and thus there is no need to increase the attention heads further.
Our findings in Table TABREF25 show that the self-attention mechanism can outperform recurrence-based models even if the recurrence-based models have multiple attention heads. The Bi-LSTM model with 64 attention heads cannot beat the best Transformer model with NH=4, and its results are very close to those of the Transformer model with NH=1. This observation clearly shows the power of self-attention-based models and demonstrates that the attention mechanism used in self-attentional models as the backbone for learning outperforms recurrence-based models even if they are augmented with multiple attention heads.
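For illustration, the following is a minimal, hypothetical sketch of the kind of Bi-LSTM-with-multiple-attention-heads variant compared in these experiments; it is not the exact architecture or hyperparameters used above, only an example of placing multi-head attention over bidirectional LSTM states.

```python
import torch
import torch.nn as nn

class BiLSTMMultiHeadEncoder(nn.Module):
    """Illustrative Bi-LSTM encoder with multi-head attention on top."""

    def __init__(self, vocab_size: int, hidden: int = 128, heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Bidirectional LSTM; its outputs have size 2 * hidden.
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Multi-head attention over the LSTM states (self-attention style).
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=heads, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        states, _ = self.bilstm(self.embed(token_ids))   # (B, T, 2H)
        attended, _ = self.attn(states, states, states)  # (B, T, 2H)
        return attended

enc = BiLSTMMultiHeadEncoder(vocab_size=1000, hidden=128, heads=4)
out = enc(torch.randint(0, 1000, (2, 12)))  # batch of 2 utterances, 12 tokens each
print(out.shape)                            # torch.Size([2, 12, 256])
```

Increasing `heads` (e.g. to 64) grows the cost of the attention layer, which mirrors the model-size and training-time observations reported above.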
Conclusion and Future Work
We have determined that Transformers and Universal Transformers are indeed effective at generating appropriate responses in task-oriented chatbot systems; in fact, their performance is even better than that of the typically used deep learning architectures. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. The Transformer model beats all other models on all of the evaluation metrics. Also, comparing LSTM with LSTM plus attention, and Bi-LSTM with Bi-LSTM plus attention, we observe that adding an attention mechanism can increase the performance of the models. Comparing the results of the self-attentional models shows that the Transformer model outperforms the other self-attentional models, while the Universal Transformer model gives reasonably good results.
In future work, it would be interesting to compare the performance of self-attentional models (specifically the winning Transformer model) against other end-to-end architectures such as the Memory Augmented Networks. | 1, 4, 8, 16, 32, 64 |
2ea382c676e418edd5327998e076a8c445d007a5 | 2ea382c676e418edd5327998e076a8c445d007a5_0 | Q: Is human evaluation performed?
Text: Introduction
Task-oriented chatbots are a type of dialogue generation system which tries to help the users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. The development of such systems falls into the Conversational AI domain which is the science of developing agents which are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by giant companies to engage with their customers.
There are mainly two different ways to create a task-oriented chatbot: either using a set of hand-crafted and carefully designed rules, or using a corpus-based method in which the chatbot can be trained with a relatively large corpus of conversational data. Given the abundance of dialogue data, the latter method seems to be a better and more general approach for developing task-oriented chatbots. The corpus-based method also falls into two main chatbot design architectures, namely pipelined and end-to-end architectures BIBREF1. End-to-end chatbots are usually neural-network based BIBREF2, BIBREF3, BIBREF4, BIBREF5 and thus can be adapted to new domains by training on relevant dialogue datasets for that specific domain. Furthermore, all sequence modelling methods can also be used in training end-to-end task-oriented chatbots. A sequence modelling method receives a sequence as input and predicts another sequence as output. For example, in the case of machine translation the input could be a sequence of words in a given language and the output would be a sentence in a second language. In a dialogue system, an utterance is the input and the predicted sequence of words would be the corresponding response.
Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in that their architecture is based only on the attention mechanism. The Transformer BIBREF6 and Universal Transformer BIBREF7 models are the first models that rely entirely on the self-attention mechanism for both encoder and decoder, and that is why they are also referred to as self-attentional models. The Transformer model has produced state-of-the-art results in neural machine translation BIBREF6, and this encouraged us to further investigate this model for the task of training task-oriented chatbots. While in the Transformer model there is no recurrence, it turns out that the recurrence used in RNN models is essential for some NLP tasks, including language understanding tasks, and thus the Transformer fails to generalize in those tasks BIBREF7. We also investigate the usage of the Universal Transformer for this task to see how it compares to the Transformer model.
We focus on self-attentional sequence modelling for this study and intend to provide an answer for one specific question which is:
How effective are self-attentional models for training end-to-end task-oriented chatbots?
Our contribution in this study is as follows:
We train end-to-end task-oriented chatbots using both self-attentional models and common recurrence-based models used in sequence modelling tasks and compare and analyze the results using different evaluation metrics on three different datasets.
We provide insight into how effective self-attentional models are for this task and benchmark the time performance of these models against the recurrence-based sequence modelling methods.
We try to quantify the effectiveness of self-attention mechanism in self-attentional models and compare its effect to recurrence-based models for the task of training end-to-end task-oriented chatbots.
Related Work ::: Task-Oriented Chatbots Architectures
End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning method in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9.
Another popular method for training chatbots is based on memory networks. Memory networks augment neural networks with task-specific memories which the model can learn to read and write. Memory networks have been used in BIBREF8 for training task-oriented agents, where dialogue context is stored in the memory module, and the model then uses it to select a system response (also stored in the memory module) from a set of candidates. A variation of key-value memory networks BIBREF10 has been used in BIBREF11 for training task-oriented chatbots, which stores the knowledge base in the form of triplets ((subject, relation, object), such as (yoga, time, 3pm)) in the key-value memory network; the model then tries to select the most relevant entity from the memory and create a relevant response. This approach makes the interaction with the knowledge base smoother compared to other models.
Another approach for training end-to-end task-oriented dialogue systems tries to model the task-oriented dialogue generation in a reinforcement learning approach in which the current state of the conversation is passed to some sequence learning network, and this network decides the action which the chatbot should act upon. End-to-end LSTM based model BIBREF12, and the Hybrid Code Networks BIBREF13 can use both supervised and reinforcement learning approaches for training task-oriented chatbots.
Related Work ::: Sequence Modelling Methods
Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attentional-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15, and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models used in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17 which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words context size, and thus can model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling.
Models
We compare the most commonly used recurrence-based models for sequence modelling and contrast them with Transformer and Universal Transformer models. The models that we train are:
Models ::: LSTM and Bi-Directional LSTM
Long Short-term Memory (LSTM) networks are a special kind of RNN networks which can learn long-term dependencies BIBREF15. RNN models suffer from the vanishing gradient problem BIBREF20 which makes it hard for RNN models to learn long-term dependencies. The LSTM model tackles this problem by defining a gating mechanism which introduces input, output and forget gates, and the model has the ability to decide how much of the previous information it needs to keep and how much of the new information it needs to integrate and thus this mechanism helps the model keep track of long-term dependencies.
Bi-directional LSTMs BIBREF21 are a variation of LSTMs which have proved to give better results for some NLP tasks BIBREF22. The idea behind a Bi-directional LSTM is to give the network (while training) the ability to look not only at past tokens, as an LSTM does, but also at future tokens, so the model has access to information from both the past and the future. In task-oriented dialogue generation systems, the information needed for the model to learn the dependencies between tokens sometimes comes from tokens that are ahead of the current index, and if the model is able to take future tokens into account it can learn more efficiently.
Models ::: Transformer
As discussed before, Transformer is the first model that entirely relies on the self-attention mechanism for both the encoder and the decoder. The Transformer uses the self-attention mechanism to learn a representation of a sentence by relating different positions of that sentence. Like many of the sequence modelling methods, Transformer follows the encoder-decoder architecture in which the input is given to the encoder and the results of the encoder is passed to the decoder to create the output sequence. The difference between Transformer (which is a self-attentional model) and other sequence models (such as recurrence-based and convolution-based) is that the encoder and decoder architecture is only based on the self-attention mechanism. The Transformer also uses multi-head attention which intends to give the model the ability to look at different representations of the different positions of both the input (encoder self-attention), output (decoder self-attention) and also between input and output (encoder-decoder attention) BIBREF6. It has been used in a variety of NLP tasks such as mathematical language understanding [110], language modeling BIBREF23, machine translation BIBREF6, question answering BIBREF24, and text summarization BIBREF25.
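The core operation behind this multi-head mechanism is scaled dot-product attention; the following NumPy sketch shows that operation in isolation (projections, head splitting, masking, and the encoder-decoder wiring are omitted).

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)              # (T_q, T_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)              # row-wise softmax
    return weights @ V                                          # weighted sum of values

T, d = 5, 16                                  # 5 tokens, dimension 16
x = np.random.randn(T, d)
out = scaled_dot_product_attention(x, x, x)   # self-attention: Q = K = V = x
print(out.shape)                              # (5, 16)
```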
Models ::: Universal Transformer
The Universal Transformer model is an encoder-decoder-based sequence-to-sequence model which applies recurrence to the representation of each of the positions of the input and output sequences. The main difference between the RNN recurrence and the Universal Transformer recurrence is that the recurrence used in the Universal Transformer is applied on consecutive representation vectors of each token in the sequence (i.e., over depth) whereas in the RNN models this recurrence is applied on positions of the tokens in the sequence. A variation of the Universal Transformer, called Adaptive Universal Transformer, applies the Adaptive Computation Time (ACT) BIBREF26 technique on the Universal Transformer model which makes the model train faster since it saves computation time and also in some cases can increase the model accuracy. The ACT allows the Universal Transformer model to use different recurrence time steps for different tokens.
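The sketch below illustrates only the depth-wise recurrence idea: a single shared transition function applied repeatedly to all position representations. The transition function and timestep signal are crude placeholders, and ACT halting is omitted.

```python
import numpy as np

def shared_block(h: np.ndarray, W: np.ndarray) -> np.ndarray:
    # Stand-in for the real self-attention + transition function.
    return np.tanh(h @ W)

def universal_transformer_encode(h: np.ndarray, W: np.ndarray, steps: int = 4) -> np.ndarray:
    T, d = h.shape
    for t in range(steps):                     # recurrence over depth, not over positions
        h = h + (t / steps)                    # crude stand-in for a timestep embedding
        h = shared_block(h, W)                 # the same parameters W are reused each step
    return h

d = 8
h0 = np.random.randn(3, d)                     # 3 token representations
print(universal_transformer_encode(h0, np.random.randn(d, d)).shape)  # (3, 8)
```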
We know, based on reported evidence that transformers are potent in NLP tasks like translation and question answering. Our aim is to assess the applicability and effectiveness of transformers and universal-transformers in the domain of task-oriented conversational agents. In the next section, we report on experiments to investigate the usage of self-attentional models performance against the aforementioned models for the task of training end-to-end task-oriented chatbots.
Experiments
We run our experiments on a Tesla 960M graphics processing unit (GPU). We evaluated the models using the aforementioned metrics and also applied early stopping (with delta set to 0.1 for 600 training steps).
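A simple sketch of such an early-stopping rule is shown below; the actual tensor2tensor hook may differ in when it checks the development loss, so this is only an illustration of the delta/patience logic.

```python
class EarlyStopper:
    """Stop when the dev loss has not improved by more than `delta` for `patience` checks."""

    def __init__(self, delta: float = 0.1, patience: int = 600):
        self.delta, self.patience = delta, patience
        self.best = float("inf")
        self.steps_without_improvement = 0

    def should_stop(self, dev_loss: float) -> bool:
        if dev_loss < self.best - self.delta:
            self.best = dev_loss
            self.steps_without_improvement = 0
        else:
            self.steps_without_improvement += 1
        return self.steps_without_improvement >= self.patience

# Toy usage with a short patience so the rule visibly triggers.
stopper = EarlyStopper(delta=0.1, patience=2)
for step, loss in enumerate([2.0, 1.5, 1.45, 1.44, 1.43]):
    if stopper.should_stop(loss):
        print("stopping at step", step)  # fires once no 0.1 improvement for 2 checks
        break
```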
Experiments ::: Datasets
We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27, which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28, namely M2M-sim-M (a dataset in the movie domain) and M2M-sim-R (a dataset in the restaurant domain). M2M stands for Machines Talking to Machines, which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets, which we will use to train and evaluate the models.
The M2M dataset has more diversity in both language and dialogue flow compared to the commonly used DSTC2 dataset, which makes it appealing for the task of creating task-oriented chatbots. This is also the reason we decided to use the M2M dataset in our experiments: to see how well models can handle a more diverse dataset.
Experiments ::: Datasets ::: Dataset Preparation
We followed the data preparation process used for feeding the conversation history into the encoder-decoder as in BIBREF5. Consider a sample dialogue $D$ in the corpus which consists of a number of turns exchanged between the user and the system. $D$ can be represented as ${(u_1, s_1),(u_2, s_2), ...,(u_k, s_k)}$ where $k$ is the number of turns in this dialogue. At each time step in the conversation, we encode the conversation turns up to that time step, which is the context of the dialogue so far, and the system response after that time step will be used as the target. For example, given we are processing the conversation at time step $i$, the context of the conversation so far would be ${(u_1, s_1, u_2, s_2, ..., u_i)}$ and the model has to learn to output ${(s_i)}$ as the target.
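A minimal sketch of this context/target construction is shown below; the turn strings are hypothetical and tokenization and delimiters are omitted.

```python
from typing import List, Tuple

def make_training_pairs(dialogue: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Turn (user, system) turn pairs into (context, target) training examples."""
    pairs = []
    history: List[str] = []
    for user_utt, system_utt in dialogue:
        history.append(user_utt)
        context = " ".join(history)            # (u_1, s_1, ..., u_i) flattened to text
        pairs.append((context, system_utt))    # the target is s_i
        history.append(system_utt)
    return pairs

dialogue = [("persian food", "what area do you prefer?"),
            ("city centre", "book a table for what time?")]
for context, target in make_training_pairs(dialogue):
    print(context, "->", target)
```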
Experiments ::: Training
We used the tensor2tensor library BIBREF29 in our experiments for training and evaluation of sequence modeling methods. We use Adam optimizer BIBREF30 for training the models. We set $\beta _1=0.9$, $\beta _2=0.997$, and $\epsilon =1e-9$ for the Adam optimizer and started with learning rate of 0.2 with noam learning rate decay schema BIBREF6. In order to avoid overfitting, we use dropout BIBREF31 with dropout chosen from [0.7-0.9] range. We also conducted early stopping BIBREF14 to avoid overfitting in our experiments as the regularization methods. We set the batch size to 4096, hidden size to 128, and the embedding size to 128 for all the models. We also used grid search for hyperparameter tuning for all of the trained models. Details of our training and hyperparameter tuning and the code for reproducing the results can be found in the chatbot-exp github repository.
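For reference, the sketch below shows the standard noam warmup/decay formula scaled by the base learning rate; the hidden size and base rate match the setup above, while the warmup length is an assumed typical value and the exact tensor2tensor implementation may differ slightly.

```python
import math

def noam_lr(step: int, d_model: int = 128, base: float = 0.2, warmup: int = 8000) -> float:
    """Noam schedule: linear warmup followed by inverse-square-root decay."""
    step = max(step, 1)
    return base * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

for s in (1, 4000, 8000, 40000):
    print(s, round(noam_lr(s), 6))
```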
Experiments ::: Inference
At inference time, there are mainly two methods for decoding: greedy search and beam search BIBREF32. Beam search has proved to be an essential part of generative NLP tasks such as neural machine translation BIBREF33. In the case of dialogue generation systems, beam search can help alleviate the problem of having many possible outputs which do not match the target but are nonetheless valid and sensible. Consider the case in which a task-oriented chatbot, trained for a restaurant reservation task, in response to the user utterance “Persian food”, generates the response “what time and day would you like the reservation for?” but the target defined for the system is “would you like a fancy restaurant?”. The response generated by the chatbot is a valid response which asks the user about other possible entities but does not match the defined target.
We try to alleviate this problem in inference time by applying the beam search technique with a different beam size $\alpha \in \lbrace 1, 2, 4\rbrace $ and pick the best result based on the BLEU score. Note that when $\alpha = 1$, we are using the original greedy search method for the generation task.
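A hedged sketch of this beam-size selection procedure is given below; `decode_fn` and `bleu_fn` are stand-ins for the real decoder and BLEU scorer rather than actual library calls.

```python
from typing import Callable, List, Sequence

def pick_beam_size(contexts: List[str], references: List[str],
                   decode_fn: Callable[[List[str], int], List[str]],
                   bleu_fn: Callable[[List[str], List[str]], float],
                   beam_sizes: Sequence[int] = (1, 2, 4)) -> int:
    """Decode with each candidate beam size and keep the one with the best BLEU."""
    best_bs, best_bleu = beam_sizes[0], float("-inf")
    for bs in beam_sizes:                          # bs = 1 is plain greedy decoding
        hypotheses = decode_fn(contexts, bs)
        score = bleu_fn(hypotheses, references)
        if score > best_bleu:
            best_bs, best_bleu = bs, score
    return best_bs
```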
Experiments ::: Evaluation Measures
BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.
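One common way to compute such a corpus-level BLEU (including the brevity penalty) is via NLTK, as in the sketch below; the exact BLEU implementation used in the experiments may differ.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One list of reference token lists per hypothesis, plus the hypothesis token lists.
references = [[["what", "time", "would", "you", "like", "?"]],
              [["how", "many", "people", "?"]]]
hypotheses = [["what", "time", "would", "you", "like", "?"],
              ["for", "how", "many", "people", "?"]]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(round(score, 4))
```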
Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.
Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.
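The two accuracies can be computed with a few lines of code; the sketch below assumes responses are compared as exact strings, matching the strict definitions above.

```python
from typing import List

def per_turn_accuracy(generated: List[str], targets: List[str]) -> float:
    """Fraction of turns whose generated response matches the target exactly."""
    return sum(g == t for g, t in zip(generated, targets)) / len(targets)

def per_dialogue_accuracy(generated: List[List[str]], targets: List[List[str]]) -> float:
    """Fraction of dialogues in which every turn matches the target exactly."""
    return sum(g == t for g, t in zip(generated, targets)) / len(targets)

print(per_turn_accuracy(["what time ?", "how many ?"],
                        ["what time ?", "how many people ?"]))        # 0.5
print(per_dialogue_accuracy([["a", "b"], ["c"]], [["a", "b"], ["x"]]))  # 0.5
```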
F1-Entity Score: Datasets used in task-oriented chatbots have a set of entities which represent user preferences. For example, in restaurant-domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we use named-entity recognition evaluation metrics BIBREF36. The F1 score, the harmonic mean of precision and recall, is the most commonly used metric for the evaluation of named-entity recognition models. We calculate this metric by micro-averaging over all the system generated responses.
Results and Discussion ::: Comparison of Models
The results of running the experiments for the aforementioned models are shown in Table TABREF14 for the DSTC2 dataset and in Table TABREF18 for the M2M datasets. The bold numbers show the best performing model in each of the evaluation metrics. As discussed before, for each model we use different beam sizes (bs) at inference time and report the best one. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. Evaluation numbers are lower on the M2M datasets; in our investigation of the trained models we found that this considerable reduction is due to the fact that the M2M dataset is considerably more diverse than the DSTC2 dataset while its training corpus is smaller.
Results and Discussion ::: Time Performance Comparison
Table TABREF22 shows the time performance of the models trained on DSTC2 dataset. Note that in order to get a fair time performance comparison, we trained the models with the same batch size (4096) and on the same GPU. These numbers are for the best performing model (in terms of evaluation loss and selected using the early stopping method) for each of the sequence modelling methods. Time to Convergence (T2C) shows the approximate time that the model was trained to converge. We also show the loss in the development set for that specific checkpoint.
Results and Discussion ::: Effect of (Self-)Attention Mechanism
As discussed before in Section SECREF8, self-attentional models rely on the self-attention mechanism for sequence modelling. Recurrence-based models such as LSTM and Bi-LSTM can also be augmented in order to increase their performance, as evident in Table TABREF14 which shows the increase in the performance of both LSTM and Bi-LSTM when augmented with an attention mechanism. This leads to the question whether we can increase the performance of recurrence-based models by adding multiple attention heads, similar to the multi-head self-attention mechanism used in self-attentional models, and outperform the self-attentional models.
To investigate this question, we ran a number of experiments in which we added multiple attention heads on top of the Bi-LSTM model and also tried different numbers of self-attention heads in the self-attentional models in order to compare their performance for this specific task. Table TABREF25 shows the results of these experiments. Note that the models in Table TABREF25 are the best models that we found in our experiments on the DSTC2 dataset and we only changed one parameter for each of them, i.e. the number of attention heads in the recurrence-based models and the number of self-attention heads in the self-attentional models, keeping all other parameters unchanged. We also report the results of models with a beam size of 2 at inference time. We increased the number of attention heads in the Bi-LSTM model up to 64 heads to observe its performance change. Note that increasing the number of attention heads makes training prohibitively time consuming while the model size increases significantly, as shown in Table TABREF24. Furthermore, by observing the results of the Bi-LSTM+Att model in Table TABREF25 (on both the test and development sets) we can see that Bi-LSTM performance decreases, so there is no need to increase the number of attention heads further.
Our findings in Table TABREF25 show that the self-attention mechanism can outperform recurrence-based models even if the recurrence-based models have multiple attention heads. The Bi-LSTM model with 64 attention heads cannot beat the best Transformer model with NH=4, and its results are very close to those of the Transformer model with NH=1. This observation clearly shows the power of self-attention-based models and demonstrates that the attention mechanism used in self-attentional models as the backbone for learning outperforms recurrence-based models even if they are augmented with multiple attention heads.
Conclusion and Future Work
We have determined that Transformers and Universal Transformers are indeed effective at generating appropriate responses in task-oriented chatbot systems; in fact, their performance is even better than that of the typically used deep learning architectures. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. The Transformer model beats all other models on all of the evaluation metrics. Also, comparing LSTM with LSTM plus attention, and Bi-LSTM with Bi-LSTM plus attention, we observe that adding an attention mechanism can increase the performance of the models. Comparing the results of the self-attentional models shows that the Transformer model outperforms the other self-attentional models, while the Universal Transformer model gives reasonably good results.
In future work, it would be interesting to compare the performance of self-attentional models (specifically the winning Transformer model) against other end-to-end architectures such as the Memory Augmented Networks. | No |
bd7a95b961af7caebf0430a7c9f675816c9c527f | bd7a95b961af7caebf0430a7c9f675816c9c527f_0 | Q: What are the three datasets used?
Text: Introduction
Task-oriented chatbots are a type of dialogue generation system which tries to help the users accomplish specific tasks, such as booking a restaurant table or buying movie tickets, in a continuous and uninterrupted conversational interface and usually in as few steps as possible. The development of such systems falls into the Conversational AI domain which is the science of developing agents which are able to communicate with humans in a natural way BIBREF0. Digital assistants such as Apple's Siri, Google Assistant, Amazon Alexa, and Alibaba's AliMe are examples of successful chatbots developed by giant companies to engage with their customers.
There are mainly two different ways to create a task-oriented chatbot: either using a set of hand-crafted and carefully designed rules, or using a corpus-based method in which the chatbot can be trained with a relatively large corpus of conversational data. Given the abundance of dialogue data, the latter method seems to be a better and more general approach for developing task-oriented chatbots. The corpus-based method also falls into two main chatbot design architectures, namely pipelined and end-to-end architectures BIBREF1. End-to-end chatbots are usually neural-network based BIBREF2, BIBREF3, BIBREF4, BIBREF5 and thus can be adapted to new domains by training on relevant dialogue datasets for that specific domain. Furthermore, all sequence modelling methods can also be used in training end-to-end task-oriented chatbots. A sequence modelling method receives a sequence as input and predicts another sequence as output. For example, in the case of machine translation the input could be a sequence of words in a given language and the output would be a sentence in a second language. In a dialogue system, an utterance is the input and the predicted sequence of words would be the corresponding response.
Self-attentional models are a new paradigm for sequence modelling tasks which differ from common sequence modelling methods, such as recurrence-based and convolution-based sequence learning, in that their architecture is based only on the attention mechanism. The Transformer BIBREF6 and Universal Transformer BIBREF7 models are the first models that rely entirely on the self-attention mechanism for both encoder and decoder, and that is why they are also referred to as self-attentional models. The Transformer model has produced state-of-the-art results in neural machine translation BIBREF6, and this encouraged us to further investigate this model for the task of training task-oriented chatbots. While in the Transformer model there is no recurrence, it turns out that the recurrence used in RNN models is essential for some NLP tasks, including language understanding tasks, and thus the Transformer fails to generalize in those tasks BIBREF7. We also investigate the usage of the Universal Transformer for this task to see how it compares to the Transformer model.
We focus on self-attentional sequence modelling for this study and intend to provide an answer for one specific question which is:
How effective are self-attentional models for training end-to-end task-oriented chatbots?
Our contribution in this study is as follows:
We train end-to-end task-oriented chatbots using both self-attentional models and common recurrence-based models used in sequence modelling tasks and compare and analyze the results using different evaluation metrics on three different datasets.
We provide insight into how effective self-attentional models are for this task and benchmark the time performance of these models against the recurrence-based sequence modelling methods.
We try to quantify the effectiveness of self-attention mechanism in self-attentional models and compare its effect to recurrence-based models for the task of training end-to-end task-oriented chatbots.
Related Work ::: Task-Oriented Chatbots Architectures
End-to-end architectures are among the most used architectures for research in the field of conversational AI. The advantage of using an end-to-end architecture is that one does not need to explicitly train different components for language understanding and dialogue management and then concatenate them together. Network-based end-to-end task-oriented chatbots as in BIBREF4, BIBREF8 try to model the learning task as a policy learning method in which the model learns to output a proper response given the current state of the dialogue. As discussed before, all encoder-decoder sequence modelling methods can be used for training end-to-end chatbots. Eric and Manning eric2017copy use the copy mechanism augmentation on simple recurrent neural sequence modelling and achieve good results in training end-to-end task-oriented chatbots BIBREF9.
Another popular method for training chatbots is based on memory networks. Memory networks augment neural networks with task-specific memories which the model can learn to read and write. Memory networks have been used in BIBREF8 for training task-oriented agents, where dialogue context is stored in the memory module, and the model then uses it to select a system response (also stored in the memory module) from a set of candidates. A variation of key-value memory networks BIBREF10 has been used in BIBREF11 for training task-oriented chatbots, which stores the knowledge base in the form of triplets ((subject, relation, object), such as (yoga, time, 3pm)) in the key-value memory network; the model then tries to select the most relevant entity from the memory and create a relevant response. This approach makes the interaction with the knowledge base smoother compared to other models.
Another approach for training end-to-end task-oriented dialogue systems tries to model the task-oriented dialogue generation in a reinforcement learning approach in which the current state of the conversation is passed to some sequence learning network, and this network decides the action which the chatbot should act upon. End-to-end LSTM based model BIBREF12, and the Hybrid Code Networks BIBREF13 can use both supervised and reinforcement learning approaches for training task-oriented chatbots.
Related Work ::: Sequence Modelling Methods
Sequence modelling methods usually fall into recurrence-based, convolution-based, and self-attentional-based methods. In recurrence-based sequence modeling, the words are fed into the model in a sequential way, and the model learns the dependencies between the tokens given the context from the past (and the future in case of bidirectional Recurrent Neural Networks (RNNs)) BIBREF14. RNNs and their variations such as Long Short-term Memory (LSTM) BIBREF15, and Gated Recurrent Units (GRU) BIBREF16 are the most widely used recurrence-based models used in sequence modelling tasks. Convolution-based sequence modelling methods rely on Convolutional Neural Networks (CNN) BIBREF17 which are mostly used for vision tasks but can also be used for handling sequential data. In CNN-based sequence modelling, multiple CNN layers are stacked on top of each other to give the model the ability to learn long-range dependencies. The stacking of layers in CNNs for sequence modeling allows the model to grow its receptive field, or in other words context size, and thus can model complex dependencies between different sections of the input sequence BIBREF18, BIBREF19. WaveNet van2016wavenet, used in audio synthesis, and ByteNet kalchbrenner2016neural, used in machine translation tasks, are examples of models trained using convolution-based sequence modelling.
Models
We compare the most commonly used recurrence-based models for sequence modelling and contrast them with Transformer and Universal Transformer models. The models that we train are:
Models ::: LSTM and Bi-Directional LSTM
Long Short-term Memory (LSTM) networks are a special kind of RNN networks which can learn long-term dependencies BIBREF15. RNN models suffer from the vanishing gradient problem BIBREF20 which makes it hard for RNN models to learn long-term dependencies. The LSTM model tackles this problem by defining a gating mechanism which introduces input, output and forget gates, and the model has the ability to decide how much of the previous information it needs to keep and how much of the new information it needs to integrate and thus this mechanism helps the model keep track of long-term dependencies.
Bi-directional LSTMs BIBREF21 are a variation of LSTMs which have proved to give better results for some NLP tasks BIBREF22. The idea behind a Bi-directional LSTM is to give the network (while training) the ability to look not only at past tokens, as an LSTM does, but also at future tokens, so the model has access to information from both the past and the future. In task-oriented dialogue generation systems, the information needed for the model to learn the dependencies between tokens sometimes comes from tokens that are ahead of the current index, and if the model is able to take future tokens into account it can learn more efficiently.
Models ::: Transformer
As discussed before, Transformer is the first model that entirely relies on the self-attention mechanism for both the encoder and the decoder. The Transformer uses the self-attention mechanism to learn a representation of a sentence by relating different positions of that sentence. Like many of the sequence modelling methods, Transformer follows the encoder-decoder architecture in which the input is given to the encoder and the results of the encoder is passed to the decoder to create the output sequence. The difference between Transformer (which is a self-attentional model) and other sequence models (such as recurrence-based and convolution-based) is that the encoder and decoder architecture is only based on the self-attention mechanism. The Transformer also uses multi-head attention which intends to give the model the ability to look at different representations of the different positions of both the input (encoder self-attention), output (decoder self-attention) and also between input and output (encoder-decoder attention) BIBREF6. It has been used in a variety of NLP tasks such as mathematical language understanding [110], language modeling BIBREF23, machine translation BIBREF6, question answering BIBREF24, and text summarization BIBREF25.
Models ::: Universal Transformer
The Universal Transformer model is an encoder-decoder-based sequence-to-sequence model which applies recurrence to the representation of each of the positions of the input and output sequences. The main difference between the RNN recurrence and the Universal Transformer recurrence is that the recurrence used in the Universal Transformer is applied on consecutive representation vectors of each token in the sequence (i.e., over depth) whereas in the RNN models this recurrence is applied on positions of the tokens in the sequence. A variation of the Universal Transformer, called Adaptive Universal Transformer, applies the Adaptive Computation Time (ACT) BIBREF26 technique on the Universal Transformer model which makes the model train faster since it saves computation time and also in some cases can increase the model accuracy. The ACT allows the Universal Transformer model to use different recurrence time steps for different tokens.
We know, based on reported evidence that transformers are potent in NLP tasks like translation and question answering. Our aim is to assess the applicability and effectiveness of transformers and universal-transformers in the domain of task-oriented conversational agents. In the next section, we report on experiments to investigate the usage of self-attentional models performance against the aforementioned models for the task of training end-to-end task-oriented chatbots.
Experiments
We run our experiments on a Tesla 960M graphics processing unit (GPU). We evaluated the models using the aforementioned metrics and also applied early stopping (with delta set to 0.1 for 600 training steps).
Experiments ::: Datasets
We use three different datasets for training the models. We use the Dialogue State Tracking Competition 2 (DSTC2) dataset BIBREF27, which is the most widely used dataset for research on task-oriented chatbots. We also used two other datasets recently open-sourced by Google Research BIBREF28, namely M2M-sim-M (a dataset in the movie domain) and M2M-sim-R (a dataset in the restaurant domain). M2M stands for Machines Talking to Machines, which refers to the framework with which these two datasets were created. In this framework, dialogues are created via dialogue self-play and later augmented via crowdsourcing. We trained our models on different datasets in order to make sure the results are not corpus-biased. Table TABREF12 shows the statistics of these three datasets, which we will use to train and evaluate the models.
The M2M dataset has more diversity in both language and dialogue flow compared to the commonly used DSTC2 dataset, which makes it appealing for the task of creating task-oriented chatbots. This is also the reason we decided to use the M2M dataset in our experiments: to see how well models can handle a more diverse dataset.
Experiments ::: Datasets ::: Dataset Preparation
We followed the data preparation process used for feeding the conversation history into the encoder-decoder as in BIBREF5. Consider a sample dialogue $D$ in the corpus which consists of a number of turns exchanged between the user and the system. $D$ can be represented as ${(u_1, s_1),(u_2, s_2), ...,(u_k, s_k)}$ where $k$ is the number of turns in this dialogue. At each time step in the conversation, we encode the conversation turns up to that time step, which is the context of the dialogue so far, and the system response after that time step will be used as the target. For example, given we are processing the conversation at time step $i$, the context of the conversation so far would be ${(u_1, s_1, u_2, s_2, ..., u_i)}$ and the model has to learn to output ${(s_i)}$ as the target.
Experiments ::: Training
We used the tensor2tensor library BIBREF29 in our experiments for training and evaluation of sequence modeling methods. We use Adam optimizer BIBREF30 for training the models. We set $\beta _1=0.9$, $\beta _2=0.997$, and $\epsilon =1e-9$ for the Adam optimizer and started with learning rate of 0.2 with noam learning rate decay schema BIBREF6. In order to avoid overfitting, we use dropout BIBREF31 with dropout chosen from [0.7-0.9] range. We also conducted early stopping BIBREF14 to avoid overfitting in our experiments as the regularization methods. We set the batch size to 4096, hidden size to 128, and the embedding size to 128 for all the models. We also used grid search for hyperparameter tuning for all of the trained models. Details of our training and hyperparameter tuning and the code for reproducing the results can be found in the chatbot-exp github repository.
Experiments ::: Inference
At inference time, there are mainly two methods for decoding: greedy search and beam search BIBREF32. Beam search has proved to be an essential part of generative NLP tasks such as neural machine translation BIBREF33. In the case of dialogue generation systems, beam search can help alleviate the problem of having many possible outputs which do not match the target but are nonetheless valid and sensible. Consider the case in which a task-oriented chatbot, trained for a restaurant reservation task, in response to the user utterance “Persian food”, generates the response “what time and day would you like the reservation for?” but the target defined for the system is “would you like a fancy restaurant?”. The response generated by the chatbot is a valid response which asks the user about other possible entities but does not match the defined target.
We try to alleviate this problem in inference time by applying the beam search technique with a different beam size $\alpha \in \lbrace 1, 2, 4\rbrace $ and pick the best result based on the BLEU score. Note that when $\alpha = 1$, we are using the original greedy search method for the generation task.
Experiments ::: Evaluation Measures
BLEU: We use the Bilingual Evaluation Understudy (BLEU) BIBREF34 metric which is commonly used in machine translation tasks. The BLEU metric can be used to evaluate dialogue generation models as in BIBREF5, BIBREF35. The BLEU metric is a word-overlap metric which computes the co-occurrence of N-grams in the reference and the generated response and also applies the brevity penalty which tries to penalize far too short responses which are usually not desired in task-oriented chatbots. We compute the BLEU score using all generated responses of our systems.
Per-turn Accuracy: Per-turn accuracy measures the similarity of the system generated response versus the target response. Eric and Manning eric2017copy used this metric to evaluate their systems in which they considered their response to be correct if all tokens in the system generated response matched the corresponding token in the target response. This metric is a little bit harsh, and the results may be low since all the tokens in the generated response have to be exactly in the same position as in the target response.
Per-Dialogue Accuracy: We calculate per-dialogue accuracy as used in BIBREF8, BIBREF5. For this metric, we consider all the system generated responses and compare them to the target responses. A dialogue is considered to be true if all the turns in the system generated responses match the corresponding turns in the target responses. Note that this is a very strict metric in which all the utterances in the dialogue should be the same as the target and in the right order.
F1-Entity Score: Datasets used in task-oriented chatbots have a set of entities which represent user preferences. For example, in restaurant-domain chatbots common entities are meal, restaurant name, date, time and the number of people (these are usually the required entities which are crucial for making reservations, but there could be optional entities such as location or rating). Each target response has a set of entities which the system asks or informs the user about. Our models have to be able to discern these specific entities and inject them into the generated response. To evaluate our models we use named-entity recognition evaluation metrics BIBREF36. The F1 score, the harmonic mean of precision and recall, is the most commonly used metric for the evaluation of named-entity recognition models. We calculate this metric by micro-averaging over all the system generated responses.
Results and Discussion ::: Comparison of Models
The results of running the experiments for the aforementioned models are shown in Table TABREF14 for the DSTC2 dataset and in Table TABREF18 for the M2M datasets. The bold numbers show the best performing model in each of the evaluation metrics. As discussed before, for each model we use different beam sizes (bs) at inference time and report the best one. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. Evaluation numbers are lower on the M2M datasets; in our investigation of the trained models we found that this considerable reduction is due to the fact that the M2M dataset is considerably more diverse than the DSTC2 dataset while its training corpus is smaller.
Results and Discussion ::: Time Performance Comparison
Table TABREF22 shows the time performance of the models trained on DSTC2 dataset. Note that in order to get a fair time performance comparison, we trained the models with the same batch size (4096) and on the same GPU. These numbers are for the best performing model (in terms of evaluation loss and selected using the early stopping method) for each of the sequence modelling methods. Time to Convergence (T2C) shows the approximate time that the model was trained to converge. We also show the loss in the development set for that specific checkpoint.
Results and Discussion ::: Effect of (Self-)Attention Mechanism
As discussed before in Section SECREF8, self-attentional models rely on the self-attention mechanism for sequence modelling. Recurrence-based models such as LSTM and Bi-LSTM can also be augmented in order to increase their performance, as evident in Table TABREF14 which shows the increase in the performance of both LSTM and Bi-LSTM when augmented with an attention mechanism. This leads to the question whether we can increase the performance of recurrence-based models by adding multiple attention heads, similar to the multi-head self-attention mechanism used in self-attentional models, and outperform the self-attentional models.
To investigate this question, we ran a number of experiments in which we added multiple attention heads on top of the Bi-LSTM model and also tried different numbers of self-attention heads in the self-attentional models in order to compare their performance for this specific task. Table TABREF25 shows the results of these experiments. Note that the models in Table TABREF25 are the best models that we found in our experiments on the DSTC2 dataset and we only changed one parameter for each of them, i.e. the number of attention heads in the recurrence-based models and the number of self-attention heads in the self-attentional models, keeping all other parameters unchanged. We also report the results of models with a beam size of 2 at inference time. We increased the number of attention heads in the Bi-LSTM model up to 64 heads to observe its performance change. Note that increasing the number of attention heads makes training prohibitively time consuming while the model size increases significantly, as shown in Table TABREF24. Furthermore, by observing the results of the Bi-LSTM+Att model in Table TABREF25 (on both the test and development sets) we can see that Bi-LSTM performance decreases, so there is no need to increase the number of attention heads further.
Our findings in Table TABREF25 show that the self-attention mechanism can outperform recurrence-based models even if the recurrence-based models have multiple attention heads. The Bi-LSTM model with 64 attention heads cannot beat the best Transformer model with NH=4, and its results are very close to those of the Transformer model with NH=1. This observation clearly shows the power of self-attention-based models and demonstrates that the attention mechanism used in self-attentional models as the backbone for learning outperforms recurrence-based models even if they are augmented with multiple attention heads.
Conclusion and Future Work
We have determined that Transformers and Universal Transformers are indeed effective at generating appropriate responses in task-oriented chatbot systems; in fact, their performance is even better than that of the typically used deep learning architectures. Our findings in Table TABREF14 show that self-attentional models outperform common recurrence-based sequence modelling methods in BLEU, per-turn accuracy, and entity F1 score. The Transformer model beats all other models on all of the evaluation metrics. Also, comparing LSTM with LSTM plus attention, and Bi-LSTM with Bi-LSTM plus attention, we observe that adding an attention mechanism can increase the performance of the models. Comparing the results of the self-attentional models shows that the Transformer model outperforms the other self-attentional models, while the Universal Transformer model gives reasonably good results.
In future work, it would be interesting to compare the performance of self-attentional models (specifically the winning Transformer model) against other end-to-end architectures such as the Memory Augmented Networks. | DSTC2, M2M-sim-M, M2M-sim-R |
f011d6d5287339a35d00cd9ce1dfeabb1f3c0563 | f011d6d5287339a35d00cd9ce1dfeabb1f3c0563_0 | Q: Did they experiment with the corpus?
Text: Introduction
When a group of people communicate in a common channel there are often multiple conversations occurring concurrently. Often there is no explicit structure identifying conversations or their structure, such as in Internet Relay Chat (IRC), Google Hangout, and comment sections on websites. Even when structure is provided it often has limited depth, such as threads in Slack, which provide one layer of branching. In all of these cases, conversations are entangled: all messages appear together, with no indication of separate conversations. Automatic disentanglement could be used to provide more interpretable results when searching over chat logs, and to help users understand what is happening when they join a channel. Over a decade of research has considered conversation disentanglement BIBREF0 , but using datasets that are either small BIBREF1 or not released BIBREF2 .
We introduce a conversation disentanglement dataset of 77,563 messages of IRC manually annotated with reply-to relations between messages. Our data is sampled from a technical support channel at 173 points in time between 2004 and 2018, providing a diverse set of speakers and topics, while remaining in a single domain. Our data is the first to include context, which differentiates messages that start a conversation from messages that are responding to an earlier point in time. We are also the first to adjudicate disagreements in disentanglement annotations, producing higher quality development and test sets. We also developed a simple model that is more effective than prior work, and showed that having diverse data makes it perform better and more consistently.
We also analyze prior disentanglement work. In particular, a recent approach from BIBREF3 , BIBREF4 . By applying disentanglement to an enormous log of IRC messages, they developed a resource that has been widely used (over 315 citations), indicating the value of disentanglement in dialogue research. However, they lacked annotated data to evaluate the conversations produced by their method. We find that 20% of the conversations are completely right or a prefix of a true conversation; 58% are missing messages, 3% contain messages from other conversations, and 19% have both issues. As a result, systems trained on the data will not be learning from accurate human-human dialogues.
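To make the comparison concrete, the sketch below shows one simplified way a heuristically extracted conversation could be matched against gold annotations and placed into categories like those above; the matching rule and input format are assumptions for illustration, not the exact evaluation procedure.

```python
from typing import Dict, List, Set

def categorize(predicted: List[int], gold_conversations: Dict[int, List[int]]) -> str:
    """Compare one predicted conversation (message ids) to gold conversations."""
    pred: Set[int] = set(predicted)
    # Take the gold conversation that overlaps the prediction the most.
    best_gold = max(gold_conversations.values(), key=lambda c: len(pred & set(c)))
    gold_set = set(best_gold)
    missing = bool(gold_set - pred)   # gold messages the heuristic dropped
    extra = bool(pred - gold_set)     # messages pulled in from other conversations
    if not missing and not extra:
        return "correct"
    if not extra and predicted == best_gold[:len(predicted)]:
        return "prefix"
    if missing and extra:
        return "both issues"
    return "missing messages" if missing else "extra messages"

gold = {0: [1, 2, 5, 7], 1: [3, 4, 6]}
print(categorize([1, 2, 5], gold))   # "prefix"
print(categorize([1, 2, 3], gold))   # "both issues"
```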
Task Definition
We consider a shared channel in which a group of people are communicating by sending messages that are visible to everyone. We label this data with a graph in which messages are nodes and edges indicate that one message is a response to another. Each connected component is a conversation.
Figure shows an example of two entangled conversations and their graph structure. It includes a message that receives multiple responses, when multiple people independently help BurgerMann, and the inverse, when the last message responds to multiple messages. We also see two of the users, delire and Seveas, simultaneously participating in two conversations. This multi-conversation participation is common.
The example also shows two aspects of IRC we will refer to later. Directed messages, an informal practice in which a participant is named in the message. These cues are useful for understanding the discussion, but only around 48% of messages have them. System messages, which indicate actions like users entering the channel. These all start with ===, but not all messages starting with === are system messages, as shown by the second message in Figure .
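Given annotated reply-to links, conversations follow mechanically as connected components of the graph; the sketch below illustrates this with a small union-find grouping (the message ids and links are made up).

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def conversations_from_links(links: List[Tuple[int, int]]) -> List[List[int]]:
    """Group messages into conversations: each connected component is one conversation."""
    parent: Dict[int, int] = {}

    def find(x: int) -> int:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in links:                      # union the two endpoints of each reply-to edge
        parent[find(a)] = find(b)

    groups = defaultdict(list)
    for node in parent:
        groups[find(node)].append(node)
    return [sorted(g) for g in groups.values()]

# Messages 1 and 2 start two conversations (self-links); 3 replies to 1, 4 to 2, 5 to 3.
links = [(1, 1), (2, 2), (3, 1), (4, 2), (5, 3)]
print(conversations_from_links(links))      # [[1, 3, 5], [2, 4]]
```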
Related Work
IRC Disentanglement Data: The most significant work on conversation disentanglement is a line of papers developing data and models for the #Linux IRC channel BIBREF1 , BIBREF5 , BIBREF6 , BIBREF7 . Until now, their dataset was the only publicly available set of messages with annotated conversations (partially re-annotated by BIBREF8 with reply-structure graphs), and has been used for training and evaluation in subsequent work BIBREF9 , BIBREF8 , BIBREF10 .
We are aware of three other IRC disentanglement datasets. First, BIBREF2 studied disentanglement and topic identification, but did not release their data. Second, BIBREF11 annotated conversations and discourse relations in the #Ubuntu-fr channel (French Ubuntu support). Third, BIBREF3 , BIBREF4 heuristically extracted conversations from the #Ubuntu channel. Their work opened up a new research opportunity by providing 930,000 disentangled conversations, and has already been the basis of many papers (315 citations), particularly on developing dialogue agents. This is far beyond the size of resources previously collected, even with crowdsourcing BIBREF15 . Using our data we provide the first empirical evaluation of their method.
Other Disentanglement Data: IRC is not the only form of synchronous group conversation online. Other platforms with similar communication formats have been studied in settings such as classes BIBREF16 , BIBREF17 , support communities BIBREF18 , and customer service BIBREF19 . Unfortunately, only one of these resources BIBREF17 is available, possibly due to privacy concerns.
Another stream of research has used user-provided structure to get conversation labels BIBREF0 , BIBREF20 and reply-to relations BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . By removing these labels and mixing conversations they create a disentanglement problem. While convenient, this risks introducing a bias, as people write differently when explicit structure is defined, and only a few papers have released data BIBREF27 , BIBREF28 , BIBREF29 .
Models: BIBREF1 explored various message-pair feature sets and linear classifiers, combined with local and global inference methods. Their system is the only publicly released statistical model for disentanglement of chat conversation, but most of the other work cited above applied similar models. We evaluate their model on both our data and our re-annotated version of their data. Recent work has applied neural networks BIBREF8 , BIBREF10 , with slight gains in performance.
Graph Structure: Within a conversation, we define a graph of reply-to relations. Almost all prior work with annotated graph structures has been for threaded web forums BIBREF30 , BIBREF31 , BIBREF32 , which do not exhibit the disentanglement problem we explore. Studies that do consider graphs for disentanglement have used small datasets BIBREF17 , BIBREF8 that are not always released BIBREF16 , BIBREF33 .
Data
We introduce a manually annotated dataset of 77,563 messages: 74,963 from the #Ubuntu IRC channel, and 2,600 messages from the #Linux IRC channel. Annotating the #Linux data enables comparison with BIBREF1 , while the #Ubuntu channel has over 34 million messages, making it an interesting large-scale resource for dialogue research. It also allows us to evaluate BIBREF3 , BIBREF4 's widely used heuristically disentangled conversations.
When choosing samples we had to strike a balance between the number of samples and the size of each one. We sampled the training set in three ways: (1) 95 uniform length samples, (2) 10 smaller samples to check annotator agreement, and (3) 48 time spans of one hour that are diverse in terms of the number of messages, the number of participants, and what percentage of messages are directed. For additional details of the data selection process, see the supplementary material.
Dataset Comparison
Table presents properties of our data and prior work on disentanglement in real-time chat.
Availability: Only one other dataset, annotated twice, has been publicly released, and two others were shared when we contacted the authors.
Scale: Our dataset is 31 times larger than almost any other dataset, the exception being one that was not released. As well as being larger, our data is also based on many different points in time. This is crucial because a single sample presents a biased view of the task. Having multiple samples also means our training and evaluation sets are from different points in time, preventing overfitting to specific users or topics of conversation.
Context: We are the first to consider the fact that IRC data is sampled from a continuous stream and the context prior to the sample is important. In prior work, a message with no antecedent could either be the start of a conversation or a response to a message that occurs prior to the sample.
Adjudication: Our labeling method is similar to prior work, but we are the first to perform adjudication of annotations. While some cases were ambiguous, often one option was clearly incorrect. By performing adjudication we can reduce these errors, creating high quality sets.
Methodology
Guidelines: We developed annotation guidelines through three rounds of pilot annotations in which annotators labeled a set of messages and discussed all disagreements. We instructed annotators to link each message to the one or more messages it is a response to. If a message started a new conversation it was linked to itself. We also described a series of subtle cases, using one to three examples to tease out differences. These included when a question is repeated, when a user responds multiple times, interjections, etc. For our full guidelines, see the supplementary material. All annotations were performed using SLATE BIBREF34 , a custom-built tool with features designed specifically for this task.
Adjudication: Table shows the number of annotators for each subset of our data. For the development, test, out-of-domain data, and a small set of the training data, we labeled each sample multiple times and then resolved all disagreements in an adjudication step. During adjudication, there was no indication of who had given which annotation, and there was the option to choose a different annotation entirely. In order to maximize the volume annotated, we did not perform adjudication for most of the training data. Also, the 18,924 training message set initially only had 100 messages of context per sample, and we later added another 900 lines and checked every message that was not a reply to see if it was a response to something in the additional context.
Annotators: The annotators were all fluent English speakers with a background in computer science (necessary to understand the technical content): a postdoc, a master's student, and three CS undergraduates. All adjudication was performed by the postdoc, who is a native English speaker.
Time: Annotations took between 7 and 11 seconds per message depending on the complexity of the discussion, and adjudication took 5 seconds per message. Overall, we spent approximately 240 hours on annotation and 15 hours on adjudication.
Annotation Quality
Our annotations define two levels of structure: (1) links between pairs of messages, and (2) sets of messages, where each set is one conversation. Annotators label (1), from which (2) can be inferred. Table TABREF10 presents inter-annotator agreement measures for both cases. These are measured in the standard manner, by comparing the labels from different annotators on the same data. We also include measurements for annotations in prior work.
Figure shows ambiguous examples from our data to provide some intuition for the source of disagreements. In both examples the disagreement involves one link, but the conversation structure in the second case is substantially changed. Some disagreements in our data are mistakes, where one annotation is clearly incorrect, and some are ambiguous cases, such as these. In Channel Two, we also see mistakes and ambiguous cases, including a particularly long discussion about a user's financial difficulties that could be divided in multiple ways (also noted by BIBREF1 ).
Graphs: We measure agreement on the graph structure annotation using the agreement coefficient of BIBREF35 . This measure of inter-rater reliability corrects for chance agreement, accounting for the class imbalance between linked and not-linked pairs.
Values are in the good agreement range proposed by BIBREF36 , and slightly higher than for BIBREF8 's annotations. Results are not shown for BIBREF1 because they did not annotate graphs.
Conversations: We consider three metrics:
(1) Variation of Information BIBREF37 . A measure of information gained or lost when going from one clustering to another. It is the sum of conditional entropies $H(X|Y) + H(Y|X)$ , where $X$ and $Y$ are clusterings of the same set of items. We consider a scaled version, using the bound that $\mathrm {VI} \le \log (n)$ for $n$ items, and present $1 - \mathrm {VI}/\log (n)$ so that larger values are better.
(2) One-to-One Overlap BIBREF1 . Percentage overlap when conversations from two annotations are optimally paired up using the max-flow algorithm. We follow BIBREF8 and keep system messages.
(3) Exact Match F-score. Calculated using the number of perfectly matching conversations, excluding conversations with only one message (mostly system messages). This is an extremely challenging metric. We include it because it is easy to understand and it directly measures a desired value (perfectly extracted conversations). A short sketch of these metrics appears after this list.
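To make these metrics concrete, below is a minimal sketch of scaled VI and exact-match F-score for two clusterings of the same messages. It is our own illustration of the definitions above, not the authors' evaluation code, and the function names and input formats are assumptions.

```python
# `gold` and `pred` map each message index to a conversation id;
# `gold_convs` / `pred_convs` are collections of sets of message ids.
import math
from collections import Counter

def variation_of_information(gold, pred):
    """VI = H(gold | pred) + H(pred | gold), using natural logarithms."""
    n = len(gold)
    joint = Counter(zip(gold, pred))
    gold_counts = Counter(gold)
    pred_counts = Counter(pred)
    h_gold_given_pred = 0.0
    h_pred_given_gold = 0.0
    for (g, p), c in joint.items():
        h_gold_given_pred -= (c / n) * math.log(c / pred_counts[p])
        h_pred_given_gold -= (c / n) * math.log(c / gold_counts[g])
    return h_gold_given_pred + h_pred_given_gold

def scaled_vi_score(gold, pred):
    """1 - VI / log(n), so that larger is better (1 means identical clusterings)."""
    n = len(gold)
    if n <= 1:
        return 1.0
    return 1.0 - variation_of_information(gold, pred) / math.log(n)

def exact_match_f1(gold_convs, pred_convs):
    """F-score over perfectly matching conversations, ignoring single-message ones."""
    gold_set = {frozenset(c) for c in gold_convs if len(c) > 1}
    pred_set = {frozenset(c) for c in pred_convs if len(c) > 1}
    matched = len(gold_set & pred_set)
    p = matched / len(pred_set) if pred_set else 0.0
    r = matched / len(gold_set) if gold_set else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```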
Our scores are higher in 4 cases and lower in 5. Interestingly, while graph agreement was higher for us than for BIBREF8 , our scores for conversations are lower. This is possible because a single link can merge two conversations, meaning a single disagreement in links can cause a major difference in conversations. This may reflect the fact that our annotation guide was developed for the Ubuntu channel, which differs in conversation style from the Channel Two data. Manually comparing the annotations, there were no clear differences in the types of disagreements.
Agreement is lower on the Channel Two data, particularly on its test set. From this we conclude that there is substantial variation in the difficulty of conversation disentanglement across datasets.
Evaluating Disentanglement Quality
In this section, we propose new simple disentanglement models that perform better than prior methods, and re-examine prior work. The models we consider are:
Previous: Each message is linked to the most recent non-system message before it.
BIBREF4 : A heuristic based on time differences and identifying directed messages.
BIBREF1 : A linear pairwise scoring model in which each message is linked to the highest scoring previous message, or none if all scores are below zero.
Linear: Our linear ranking model that scores potential antecedents using a feature-based model based on properties such as time, directedness, word overlap, and context.
Feedforward (FF): Our feedforward model with the same features as the linear model, plus a sentence embedding calculated using an average of vectors from GloVe BIBREF38 .
Union: Run 10 FF models trained with different random seeds and combine their output by keeping all edges predicted.
Vote: Run 10 FF models and combine output by keeping the edges they all agree on. Link messages with no agreed antecedent to themselves.
Intersect: Conversations that 10 FF models agree on, and other messages as singleton conversations.
For Channel Two we also compare to BIBREF9 and BIBREF8 , but their code was unavailable, preventing evaluation on our data. We exclude BIBREF10 as they substantially modified the dataset. For details of models, including hyperparameters tuned on the development set, see the supplementary material.
Results
Graphs: Table TABREF13 presents precision, recall, and F-score over links. Our models perform much better than the baseline. As we would expect, vote has higher precision, while union has higher recall. Vote has higher recall than a single feedforward model because it identifies more of the self-link cases (its default when there is no agreement).
Conversations: Table TABREF14 presents results on the metrics defined in Section SECREF8 . There are three regions of performance. First, the baseline has consistently low scores since it forms a single conversation containing all messages. Second, BIBREF1 and BIBREF4 perform similarly, with one doing better on VI and the other on 1-1, though BIBREF1 do consistently better across the exact conversation extraction metrics. Third, our methods do best, with x10 vote best in all cases except precision, where the intersect approach is much better.
Dataset Variations: Table TABREF15 shows results for the feedforward model with several modifications to the training set, designed to test corpus design decisions. Removing context does not substantially impact results. Decreasing the data size to match BIBREF1 's training set leads to worse results, both if the sentences are from diverse contexts (3rd row), and if they are from just two contexts (bottom row). We also see a substantial increase in the standard deviation when only two samples are used, indicating that performance is not robust when the data is not widely sampled.
Channel Two Results
For Channel Two, we consider two annotations of the same underlying text: ours and BIBREF1 's. To compare with prior work, we use the metrics defined by BIBREF0 and BIBREF1 . We do not use these for our data as they have been superseded by more rigorously studied metrics (VI for Shen) or make strong assumptions about the data (Loc). We do not evaluate on graphs because BIBREF1 's annotations do not include them. This also prevents us from training our method on their data.
Model Comparison: For Elsner's annotations (top section of Table TABREF18 ), their approach remains the most effective with just Channel Two data. However, training on our Ubuntu data, treating Channel Two as an out-of-domain sample, yields substantially higher performance on two metrics and comparable performance on the third. On our annotations (bottom section), we see the same trend. In both cases, the heuristic from BIBREF3 , BIBREF4 performs poorly. We suspect our model trained only on Channel Two data is overfitting, as the graph F-score on the training data is 94, whereas on the Ubuntu data it is 80.
Data Comparison: Comparing the same models in the top and bottom section, scores are consistently higher for our annotations, except for the BIBREF3 , BIBREF4 heuristic. Comparing the annotations, we find that their annotators identified between 250 and 328 conversations (mean 281), while we identify 257. Beyond this difference it is hard to identify consistent variations in the annotations. Another difference is the nature of the evaluation. On Elsner's data, evaluation is performed by measuring relative to each annotators labels and averaging the scores. On our data, we adjudicated the annotations, providing a single gold standard. Evaluating our Channel-Two-trained Feedforward model on our two pre-adjudication annotations and averaging scores, the results are lower by 3.1, 1.8, and 4.3 on 1-1, Loc and Shen respectively. This suggests that our adjudication process removes annotator mistakes that introduce noise into the evaluation.
Evaluating the Heuristic of BIBREF3 , BIBREF4
The previous section showed that only 10.8% of the conversations extracted by the heuristic in BIBREF3 , BIBREF4 are correct (P in Table TABREF14 ). We focus on precision because the primary use of their method has been to extract conversations to train and test dialogue systems, which will be impacted by errors in the conversations. Recall errors (measuring missed conversations) are not as serious a problem because the Ubuntu chat logs are so large that even with low recall a large number of conversations will still be extracted.
Additional Metrics: First, we must check this is not an artifact of our test set. On our development set, P, R, and F are slightly higher (11.6, 8.1 and 9.5), but VI and 1-1 are slightly lower (80.0 and 51.7). We can also measure performance as the distribution of scores over all of the samples we annotated. The average precision was 10, and varied from 0 to 50, with 19% of cases at 0 and 95% below 23. To avoid the possibility that we made a mistake running their code, we also considered evaluating their released conversations. On the data that overlapped with our annotations, the precision was 9%. These results indicate that the test set performance is not an aberration: the heuristic's results are consistently low, with only about 10% of output conversations completely right.
Error Types: Figure shows an example heuristic output with several types of errors. The initial question was missed, as was the final resolution, and in the middle there is a message from a separate conversation. 67% of conversations were a subset of a true conversation (i.e., only missed messages), and 3% were a superset of a true conversation (i.e., only had extra messages). The subset cases were missing 1-187 messages (missing 56% of the conversation on average) and the superset cases had 1-3 extra messages (an extra 31% of the conversation on average). The first message is particularly important because it is usually the question being resolved. In 47% of cases the first message is not the true start of a conversation.
It is important to note that the dialogue task the conversations were intended for only uses a prefix of each conversation. For this purpose, missing the end of a conversation is not a problem. In 9% of cases, the conversation is a true prefix of a gold conversation. Combined with the exact match cases, that means 20% of the conversations are accurate as used in the next utterance selection task. A further 9% of cases are a continuous chunk of a conversation, but missing one or more messages at the start.
Long Distance Links: One issue we observed is that conversations often spanned days. We manually inspected a random sample: 20 conversations 12 to 24 hours long, and 20 longer than 24 hours. All of the longer conversations and 17 of the shorter ones were clearly incorrect. This issue is not measured in the analysis above because our samples do not span days (they are 5.5 hours long on average when including context). The original work notes this issue, but claims that it is rare. We measured the time between consecutive messages in conversations and plot the frequency of each value in Figure FIGREF20 . The figure indicates that the conversations often extend over days, or even more than a month apart (note the point in the top-right corner). In contrast, our annotations rarely contain links beyond an hour, and the output of our model rarely contains links longer than 2 hours.
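The measurement described above can be sketched as follows; the message representation (timestamp, conversation id pairs in channel order) is an assumption for illustration, not the paper's data format.

```python
# Time gaps between consecutive messages of the same conversation.
from collections import defaultdict

def consecutive_gaps(messages):
    """messages: iterable of (timestamp_in_seconds, conversation_id).
    Returns gaps in minutes between consecutive messages of each conversation."""
    last_time = {}          # conversation_id -> timestamp of its latest message
    gaps = []
    for timestamp, conv_id in messages:
        if conv_id in last_time:
            gaps.append((timestamp - last_time[conv_id]) / 60.0)
        last_time[conv_id] = timestamp
    return gaps

def bucket(gaps):
    """Coarse buckets to see how often conversations span hours or days."""
    buckets = defaultdict(int)
    for g in gaps:
        if g <= 2:
            buckets["<=2 min"] += 1
        elif g <= 60:
            buckets["2-60 min"] += 1
        elif g <= 60 * 24:
            buckets["1-24 hours"] += 1
        else:
            buckets[">1 day"] += 1
    return dict(buckets)
```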
Causes: To investigate possible reasons for these issues, we measured several properties of our data to test assumptions in the heuristic. First, the heuristic assumes if all directed messages from a user are in one conversation, all undirected messages from the user are in the same conversation. We find this is true 52.2% of the time. Second, it assumes that it is rare for two people to respond to an initial question. In our data, of the messages that start a conversation and receive a response, 37.7% receive multiple responses. Third, that a directed message can start a conversation, which we find in 6.8% of cases. Fourth, that the first response to a question is within 3 minutes, which we find is true in 94.8% of conversations. Overall, these assumptions have mixed support from our data, which may be why the heuristic produces so few accurate conversations.
Dialogue Modeling: Most of the work building on BIBREF4 uses the conversations to train and evaluate dialogue systems. To see the impact on downstream work, we constructed a next utterance selection task as described in their work, disentangling the entire #Ubuntu logs with our feedforward model. We tried two dialogue models: a dual-encoder BIBREF4 , and Enhanced Long Short-Term Memory BIBREF39 . For full details of the task and model hyperparameters, see the supplementary material.
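As a rough illustration of the first of these two models, here is a PyTorch sketch of a dual-encoder ranker for next utterance selection in the spirit of BIBREF4 ; the layer sizes, the shared encoder, and the training loss shown are our simplifications, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Bilinear term M that scores a (context, response) pair.
        self.M = nn.Parameter(torch.randn(hidden_dim, hidden_dim) * 0.01)

    def encode(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        return h[-1]                      # final hidden state, (batch, hidden)

    def forward(self, context_ids, response_ids):
        c = self.encode(context_ids)      # (batch, hidden)
        r = self.encode(response_ids)     # (batch, hidden)
        return (c @ self.M * r).sum(dim=1)  # c^T M r for each pair in the batch

# Training pairs each context with its true next utterance (label 1) and with
# random utterances (label 0), e.g.:
#   loss = nn.BCEWithLogitsLoss()(model(ctx_ids, resp_ids), labels)
```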
Table TABREF23 shows results when varying the training and test datasets. Training and testing on the same dataset leads to higher performance than training on one and testing on the other. This is true even though the heuristic data contains nine times as many training conversations. This is evidence that our conversations are fundamentally different despite being derived from the same resource and filtered in the same way, and that our changes lead to quantitatively different downstream models. Fortunately, the relative performance of the two models remains consistent across the two datasets.
Re-Examining Disentanglement Research
Using our data we also investigate other assumptions made in prior work. The scale of our data provides a more robust test of these ideas.
Number of samples: Table shows that all prior work with available data has considered a small number of samples. In Table TABREF15 , we saw that training on less diverse data samples led to models that performed worse and with higher variance. We can also investigate this by looking at performance on the different samples in our test set. The difficulty of samples varies considerably, with the F-score of our model varying from 11 to 40 and annotator agreement scores before adjudication varying from 0.65 to 0.78. The model performance and agreement levels are also strongly correlated, with a Spearman's rank correlation of 0.77. This demonstrates the importance of evaluating on data from more than one point in time to get a robust estimate of performance.
How far apart consecutive messages in a conversation are: BIBREF1 and BIBREF8 use a limit of 129 seconds, BIBREF10 limit to within 1 hour, BIBREF33 limit to within 8 messages, and we limit to within 100 messages. Figure FIGREF20 shows the distribution of time differences in our conversations. 94.9% are within 2 minutes, and almost all are within an hour. 88.3% are 8 messages or less apart, and 99.4% are 100 or less apart. This suggests that the lower limits in prior work are too low. However, in Channel Two, 98% of messages are within 2 minutes, suggesting this property is channel and sample dependent.
Concurrent conversations: BIBREF2 forced annotators to label at most 3 conversations, while BIBREF10 remove conversations to ensure there are no more than 10 at once. We find there are 3 or fewer 46.4% of the time and 10 or fewer 97.3% of the time (where time is in terms of messages, not minutes, and we ignore system messages). Presumably the annotators in BIBREF2 would have proposed changes if the 3 conversation limit was problematic, suggesting that their data is less entangled than ours.
Conversation and message length: BIBREF2 annotate blocks of 200 messages. If such a limit applied to our data, 13.7% of conversations would not finish before the cutoff point. This suggests that their conversations are typically shorter, which is consistent with the previous conclusion that their conversations are less entangled. BIBREF10 remove conversations with fewer than 10 messages, describing them as outliers, and remove messages shorter than 5 words, arguing that they were not part of real conversations. Not counting conversations with only system messages, 83.4% of our conversations have fewer than 10 messages, 40.8% of which have multiple authors. 88.5% of messages with less than 5 words are in conversations with more than one author. These values suggest that these messages and conversations are real and not outliers.
Overall: This analysis indicates that working from a small number of samples can lead to major bias in system design for disentanglement. There is substantial variation across channels, and across time within a single channel.
Conclusion
Conversation disentanglement has been under-studied because of a lack of public, annotated datasets. We introduce a new corpus that is larger and more diverse than any prior corpus, and the first to include context and adjudicated annotations. Using our data, we perform the first empirical analysis of BIBREF3 , BIBREF4 's widely used data, finding that only 20% of the conversations their method produces are true prefixes of conversations. The models we develop have already enabled new directions in dialogue research, providing disentangled conversations for DSTC 7 track 1 BIBREF40 , BIBREF41 and will be used in DSTC 8. We also show that diversity is particularly important for the development of robust models. This work fills a key gap that has limited research, providing a new opportunity for understanding synchronous multi-party conversation online.
Acknowledgements
We would like to thank Jacob Andreas, Greg Durrett, Will Radford, Ryan Lowe, and Glen Pink for helpful feedback on earlier drafts of this paper and the anonymous reviewers for their helpful suggestions. This material is based in part on work supported by IBM as part of the Sapphire Project at the University of Michigan. Any opinions, findings, conclusions or recommendations expressed above do not necessarily reflect the views of IBM. | Yes |
2ba0c7576eb5b84463a59ff190d4793b67f40ccc | 2ba0c7576eb5b84463a59ff190d4793b67f40ccc_0 | Q: How were the feature representations evaluated?
Text: Introduction
Neural networks for language processing have advanced rapidly in recent years. A key breakthrough was the introduction of transformer architectures BIBREF0 . One recent system based on this idea, BERT BIBREF1 , has proven to be extremely flexible: a single pretrained model can be fine-tuned to achieve state-of-the-art performance on a wide variety of NLP applications. This suggests the model is extracting a set of generally useful features from raw text. It is natural to ask, which features are extracted? And how is this information represented internally?
Similar questions have arisen with other types of neural nets. Investigations of convolutional neural networks BIBREF2 , BIBREF3 have shown how representations change from layer to layer BIBREF4 ; how individual units in a network may have meaning BIBREF5 ; and that “meaningful” directions exist in the space of internal activation BIBREF6 . These explorations have led to a broader understanding of network behavior.
Analyses on language-processing models (e.g., BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ) point to the existence of similarly rich internal representations of linguistic structure. Syntactic features seem to be extracted by RNNs (e.g., BIBREF7 , BIBREF9 ) as well as in BERT BIBREF11 , BIBREF12 , BIBREF13 , BIBREF10 . Inspirational work from Hewitt and Manning BIBREF8 found evidence of a geometric representation of entire parse trees in BERT's activation space.
Our work extends these explorations of the geometry of internal representations. Investigating how BERT represents syntax, we describe evidence that attention matrices contain grammatical representations. We also provide mathematical arguments that may explain the particular form of the parse tree embeddings described in BIBREF8 . Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level. Moreover, much of this semantic information appears to be encoded in a relatively low-dimensional subspace.
Context and related work
Our object of study is the BERT model introduced in BIBREF1 . To set context and terminology, we briefly describe the model's architecture. The input to BERT is based on a sequence of tokens (words or pieces of words). The output is a sequence of vectors, one for each input token. We will often refer to these vectors as context embeddings because they include information about a token's context.
BERT's internals consist of two parts. First, an initial embedding for each token is created by combining a pre-trained wordpiece embedding with position and segment information. Next, this initial sequence of embeddings is run through multiple transformer layers, producing a new sequence of context embeddings at each step. (BERT comes in two versions, a 12-layer BERT-base model and a 24-layer BERT-large model.) Implicit in each transformer layer is a set of attention matrices, one for each attention head, each of which contains a scalar value for each ordered pair $(token_i, token_j)$ .
Language representation by neural networks
Sentences are sequences of discrete symbols, yet neural networks operate on continuous data–vectors in high-dimensional space. Clearly a successful network translates discrete input into some kind of geometric representation–but in what form? And which linguistic features are represented?
The influential Word2Vec system BIBREF14 , for example, has been shown to place related words near each other in space, with certain directions in space corresponding to semantic distinctions. Grammatical information such as number and tense is also represented via directions in space. Analyses of the internal states of RNN-based models have shown that they represent information about soft hierarchical syntax in a form that can be extracted by a one-hidden-layer network BIBREF9 . One investigation of full-sentence embeddings found a wide variety of syntactic properties could be extracted not just by an MLP, but by logistic regression BIBREF15 .
Several investigations have focused on transformer architectures. Experiments suggest context embeddings in BERT and related models contain enough information to perform many tasks in the traditional “NLP pipeline” BIBREF12 –tagging part-of-speech, co-reference resolution, dependency labeling, etc.–with simple classifiers (linear or small MLP models) BIBREF11 , BIBREF10 . Qualitative, visualization-based work BIBREF16 suggests attention matrices may encode important relations between words.
A recent and fascinating discovery by Hewitt and Manning BIBREF8 , which motivates much of our work, is that BERT seems to create a direct representation of an entire dependency parse tree. The authors find that (after a single global linear transformation, which they term a “structural probe”) the square of the distance between context embeddings is roughly proportional to tree distance in the dependency parse. They ask why squaring distance is necessary; we address this question in the next section.
The work cited above suggests that language-processing networks create a rich set of intermediate representations of both semantic and syntactic information. These results lead to two motivating questions for our research. Can we find other examples of intermediate representations? And, from a geometric perspective, how do all these different types of information coexist in a single vector?
Geometry of syntax
We begin by exploring BERT's internal representation of syntactic information. This line of inquiry builds on the work by Hewitt and Manning in two ways. First, we look beyond context embeddings to investigate whether attention matrices encode syntactic features. Second, we provide a simple mathematical analysis of the tree embeddings that they found.
Attention probes and dependency representations
As in BIBREF8 , we are interested in finding representations of dependency grammar relations BIBREF17 . While BIBREF8 analyzed context embeddings, another natural place to look for encodings is in the attention matrices. After all, attention matrices are explicitly built on the relations between pairs of words.
To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing BIBREF11 . An attention probe is a task for a pair of tokens, $(token_i, token_j)$ where the input is a model-wide attention vector formed by concatenating the entries $a_{ij}$ in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words.
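A minimal sketch of building such a model-wide attention vector, assuming the current HuggingFace transformers API (the experiments themselves used PyTorch directly, so the exact extraction code differed):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def model_wide_attention_vector(sentence, i, j):
    """Concatenate the attention entries a_ij from every head in every layer.
    i and j are token indices in BERT's tokenization of the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: tuple of 12 tensors, each (batch, heads, seq, seq)
    per_layer = [layer[0, :, i, j] for layer in outputs.attentions]
    return torch.cat(per_layer)           # 12 layers * 12 heads = 144 values
```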
The data for our first experiment is a corpus of parsed sentences from the Penn Treebank BIBREF18 . This dataset has the constituency grammar for the sentences, which was translated to a dependency grammar using the PyStanfordDependencies library BIBREF19 . The entirety of the Penn Treebank consists of 3.1 million dependency relations; we filtered this by using only examples of the 30 dependency relations with more than 5,000 examples in the data set. We then ran each sentence through BERT-base, and obtained the model-wide attention vector (see Figure 1 ) between every pair of tokens in the sentence, excluding the $[SEP]$ and $[CLS]$ tokens. This and subsequent experiments were conducted using PyTorch on MacBook machines.
With these labeled embeddings, we trained two L2 regularized linear classifiers via stochastic gradient descent, using BIBREF20 . The first of these probes was a simple linear binary classifier to predict whether or not an attention vector corresponds to the existence of a dependency relation between two tokens. This was trained with a balanced class split, and 30% train/test split. The second probe was a multiclass classifier to predict which type of dependency relation exists between two tokens, given the dependency relation’s existence. This probe was trained with distributions outlined in table 2 .
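A sketch of the two probes using scikit-learn's SGDClassifier; the regularization strength and other hyperparameters shown here are assumptions, not the values used in the paper:

```python
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

def train_probes(X, y_exists, X_rel, y_type):
    """X: model-wide attention vectors with 0/1 existence labels y_exists
    (the paper balances these classes first); X_rel, y_type: vectors and
    relation labels for pairs where a dependency relation exists."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_exists, test_size=0.3)
    binary_probe = SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-4)
    binary_probe.fit(X_tr, y_tr)
    print("binary accuracy:", binary_probe.score(X_te, y_te))

    Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(X_rel, y_type, test_size=0.3)
    multiclass_probe = SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-4)
    multiclass_probe.fit(Xr_tr, yr_tr)
    print("multiclass accuracy:", multiclass_probe.score(Xr_te, yr_te))
    return binary_probe, multiclass_probe
```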
The binary probe achieved an accuracy of 85.8%, and the multiclass probe achieved an accuracy of 71.9%. Our real aim, again, is not to create a state-of-the-art parser, but to gauge whether model-wide attention vectors contain a relatively simple representation of syntactic features. The success of this simple linear probe suggests that syntactic information is in fact encoded in the attention vectors.
Geometry of parse tree embeddings
Hewitt and Manning's result that context embeddings represent dependency parse trees geometrically raises several questions. Is there a reason for the particular mathematical representation they found? Can we learn anything by visualizing these representations?
Hewitt and Manning ask why parse tree distance seems to correspond specifically to the square of Euclidean distance, and whether some other metric might do better BIBREF8 . We describe mathematical reasons why squared Euclidean distance may be natural.
First, one cannot generally embed a tree, with its tree metric $d$ , isometrically into Euclidean space (Appendix "Embedding trees in Euclidean space" ). Since an isometric embedding is impossible, motivated by the results of BIBREF8 we might ask about other possible representations.
Definition 1 (power- $p$ embedding) Let $M$ be a metric space, with metric $d$ . We say $f: M \rightarrow \mathbb {R}^n$ is a power- $p$ embedding if for all $x, y \in M$ , we have $||f(x) - f(y)||^p = d(x, y)$
In these terms, we can say BIBREF8 found evidence of a power-2 embedding for parse trees. It turns out that power-2 embeddings are an especially elegant mapping. For one thing, it is easy to write down an explicit model–a mathematical idealization–for a power-2 embedding for any tree.
Theorem 1 Any tree with $n$ nodes has a power-2 embedding into $\mathbb {R}^{n-1}$ .
Let the nodes of the tree be $t_0, ..., t_{n-1}$ , with $t_0$ being the root node. Let $\lbrace e_1, ..., e_{n-1}\rbrace $ be orthogonal unit basis vectors for $\mathbb {R}^{n-1}$ . Inductively, define an embedding $f$ such that $f(t_0) = 0$ and $f(t_i) = e_i + f(parent(t_i))$ for $i \ge 1$ .
Given two distinct tree nodes $x$ and $y$ , where $m$ is the tree distance $d(x, y)$ , it follows that we can move from $f(x)$ to $f(y)$ using $m$ mutually perpendicular unit steps. Thus $||f(x) - f(y)||^2 = m = d(x, y)$
Remark 1
This embedding has a simple informal description: at each embedded vertex of the graph, all line segments to neighboring embedded vertices are unit-distance segments, orthogonal to each other and to every other edge segment. (It's even easy to write down a set of coordinates for each node.) By definition any two power-2 embeddings of the same tree are isometric; with that in mind, we refer to this as the canonical power-2 embedding.
In the proof of Theorem 1, instead of choosing basis vectors in advance, one can choose random unit vectors. Because two random vectors will be nearly orthogonal in high-dimensional space, the power-2 embedding condition will approximately hold. This means that in space that is sufficiently high-dimensional (compared to the size of the tree) it is possible to construct an approximate power-2 embedding with essentially “local” information, where a tree node is connected to its children via random unit-length branches. We refer to this type of embedding as a random branch embedding. (See Appendix "Ideal vs. actual parse tree embeddings" for a visualization of these various embeddings.)
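Both constructions are easy to write down explicitly. The sketch below (our own illustration, not code from the paper) builds the canonical power-2 embedding and a random branch embedding for a small tree and numerically checks the power-2 property:

```python
import numpy as np

def canonical_power2_embedding(parent):
    """parent[i] is the parent of node i; parent[0] = 0 marks the root.
    Node i > 0 is placed at its parent's position plus the i-th unit basis
    vector of R^(n-1). Assumes parents appear before their children."""
    n = len(parent)
    emb = np.zeros((n, n - 1))
    for i in range(1, n):
        emb[i] = emb[parent[i]]
        emb[i, i - 1] += 1.0
    return emb

def random_branch_embedding(parent, dim=1024):
    """Same construction, but each branch is a random unit vector in R^dim,
    so the power-2 property only holds approximately."""
    n = len(parent)
    emb = np.zeros((n, dim))
    for i in range(1, n):
        branch = np.random.randn(dim)
        emb[i] = emb[parent[i]] + branch / np.linalg.norm(branch)
    return emb

# Example: a root (node 0) with children 1 and 2; node 3 is a child of node 1.
parent = [0, 0, 0, 1]
emb = canonical_power2_embedding(parent)
# Tree distance between nodes 2 and 3 is 3; squared distance should match.
assert abs(np.sum((emb[2] - emb[3]) ** 2) - 3.0) < 1e-9
```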
In addition to these appealing aspects of power-2 embeddings, it is worth noting that power- $p$ embeddings will not necessarily even exist when $p < 2$ . (See Appendix "Embedding trees in Euclidean space" for the proof.)
Theorem 2 For any $p < 2$ , there is a tree which has no power- $p$ embedding.
Remark 2
On the other hand, the existence result for power-2 embeddings, coupled with results of BIBREF22 , implies that power- $p$ tree embeddings do exist for any $p > 2$ .
The simplicity of power-2 tree embeddings, as well as the fact that they may be approximated by a simple random model, suggests they may be a generally useful alternative to approaches to tree embeddings that require hyperbolic geometry BIBREF23 .
How do parse tree embeddings in BERT compare to exact power-2 embeddings? To explore this question, we created a simple visualization tool. The input to each visualization is a sentence from the Penn Treebank with associated dependency parse trees (see Section "Geometry of word senses" ). We then extracted the token embeddings produced by BERT-large in layer 16 (following BIBREF8 ), transformed by Hewitt and Manning’s “structural probe” matrix $B$ , yielding a set of points in 1024-dimensional space. We used PCA to project to two dimensions. (Other dimensionality-reduction methods, such as t-SNE and UMAP BIBREF24 , were harder to interpret.)
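A compact sketch of this projection step, with the probe matrix and embedding shapes treated as assumptions for illustration (here $B$ is taken to be square, per the description above):

```python
import numpy as np
from sklearn.decomposition import PCA

def project_parse_tree(context_embeddings, B):
    """context_embeddings: (num_tokens, 1024) from BERT-large layer 16.
    B: the trained structural probe matrix, assumed (1024, 1024) here."""
    probed = context_embeddings @ B.T   # after B, squared distances track tree distance
    return PCA(n_components=2).fit_transform(probed)
```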
To visualize the tree structure, we connected pairs of points representing words with a dependency relation. The color of each edge indicates the deviation from true tree distance. We also connected, with dotted line, pairs of words without a dependency relation but whose positions (before PCA) were far closer than expected. The resulting image lets us see both the overall shape of the tree embedding, and fine-grained information on deviation from a true power-2 embedding.
Two example visualizations are shown in Figure 2 , next to traditional diagrams of their underlying parse trees. These are typical cases, illustrating some common patterns; for instance, prepositions are embedded unexpectedly close to words they relate to. (Figure 7 shows additional examples.)
A natural question is whether the difference between these projected trees and the canonical ones is merely noise, or a more interesting pattern. By looking at the average embedding distances of each dependency relation (see Figure 3 ) , we can see that they vary widely from around 1.2 ( $compound:prt$ , $advcl$ ) to 2.5 ( $mwe$ , $parataxis$ , $auxpass$ ). Such systematic differences suggest that BERT's syntactic representation has an additional quantitative aspect beyond traditional dependency grammar.
Geometry of word senses
BERT seems to have several ways of representing syntactic information. What about semantic features? Since embeddings produced by transformer models depend on context, it is natural to speculate that they capture the particular shade of meaning of a word as used in a particular sentence. (E.g., is “bark” an animal noise or part of a tree?) We explored geometric representations of word sense both qualitatively and quantitatively.
Visualization of word senses
Our first experiment is an exploratory visualization of how word sense affects context embeddings. For data on different word senses, we collected all sentences used in the introductions to English-language Wikipedia articles. (Text outside of introductions was frequently fragmentary.) We created an interactive application, which we plan to make public. A user enters a word, and the system retrieves 1,000 sentences containing that word. It sends these sentences to BERT-base as input, and for each one it retrieves the context embedding for the word from a layer of the user's choosing.
The system visualizes these 1,000 context embeddings using UMAP BIBREF24 , generally showing clear clusters relating to word senses. Different senses of a word are typically spatially separated, and within the clusters there is often further structure related to fine shades of meaning. In Figure 4 , for example, we not only see crisp, well-separated clusters for three meanings of the word “die,” but within one of these clusters there is a kind of quantitative scale, related to the number of people dying.
See Appendix "Additional word sense visualizations" for further examples. The apparent detail in the clusters we visualized raises two immediate questions. First, is it possible to find quantitative corroboration that word senses are well-represented? Second, how can we resolve a seeming contradiction: in the previous section, we saw how position represented syntax; yet here we see position representing semantics.
Measurement of word sense disambiguation capability
The crisp clusters seen in visualizations such as Figure 4 suggest that BERT may create simple, effective internal representations of word senses, putting different meanings in different locations. To test this hypothesis quantitatively, we test whether a simple classifier on these internal representations can perform well at word-sense disambiguation (WSD).
We follow the procedure described in BIBREF10 , which performed a similar experiment with the ELMo model. For a given word with $n$ senses, we make a nearest-neighbor classifier where each neighbor is the centroid of a given word sense's BERT-base embeddings in the training data. To classify a new word we find the closest of these centroids, defaulting to the most commonly used sense if the word was not present in the training data. We used the data and evaluation from BIBREF25 : the training data was SemCor BIBREF26 (33,362 senses), and the testing data was the suite described in BIBREF25 (3,669 senses).
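A sketch of this nearest-centroid classifier; the embed helper (which would extract a word's BERT-base context embedding from a sentence) is assumed rather than shown:

```python
import numpy as np
from collections import defaultdict

def build_centroids(train_examples, embed):
    """train_examples: (word, sense, sentence) triples, e.g. from SemCor."""
    vectors = defaultdict(list)
    for word, sense, sentence in train_examples:
        vectors[(word, sense)].append(embed(word, sentence))
    return {key: np.mean(vecs, axis=0) for key, vecs in vectors.items()}

def predict_sense(word, sentence, centroids, embed, fallback_sense=None):
    """Pick the sense whose centroid is nearest to the word's context embedding;
    fall back (e.g. to the most commonly used sense) for unseen words."""
    candidates = [(w, s) for (w, s) in centroids if w == word]
    if not candidates:
        return fallback_sense
    v = embed(word, sentence)
    return min(candidates, key=lambda k: np.linalg.norm(v - centroids[k]))[1]
```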
The simple nearest-neighbor classifier achieves an F1 score of 71.1, higher than the current state of the art (Table 1 ), with the accuracy monotonically increasing through the layers. This is a strong signal that context embeddings are representing word-sense information. Additionally, an even higher score of 71.5 was obtained using the technique described in the following section.
We hypothesized that there might also exist a linear transformation under which distances between embeddings would better reflect their semantic relationships–that is, words of the same sense would be closer together and words of different senses would be further apart.
To explore this hypothesis, we trained a probe following Hewitt and Manning's methodology. We initialized a random matrix $B\in \mathbb {R}^{k\times m}$ , testing different values for $m$ . Loss is, roughly, defined as the difference between the average cosine similarity between embeddings of words with different senses, and that between embeddings of the same sense. However, we clamped the cosine similarity terms to within $\pm 0.1$ of the pre-training averages for same and different senses. (Without clamping, the trained matrix simply ended up taking well-separated clusters and separating them further. We tested values between $0.05$ and $0.2$ for the clamping range and $0.1$ had the best performance.)
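One plausible reading of this loss, written as a PyTorch sketch; the clamping details beyond what is stated above are our interpretation, not the paper's exact implementation:

```python
import torch

def probe_loss(E, B, same_pairs, diff_pairs, avg_same, avg_diff, band=0.1):
    """E: (num_words, k) context embeddings; B: (k, m) trainable probe matrix.
    same_pairs / diff_pairs: LongTensors of index pairs, shape (num_pairs, 2).
    avg_same / avg_diff: pre-training average cosine similarities."""
    P = E @ B
    P = P / P.norm(dim=1, keepdim=True)
    def mean_cos(pairs):
        i, j = pairs[:, 0], pairs[:, 1]
        return (P[i] * P[j]).sum(dim=1).mean()
    same = torch.clamp(mean_cos(same_pairs), avg_same - band, avg_same + band)
    diff = torch.clamp(mean_cos(diff_pairs), avg_diff - band, avg_diff + band)
    # Minimizing pushes different senses apart and same senses together.
    return diff - same
```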
Our training corpus was the same dataset from 4.1.2., filtered to include only words with at least two senses, each with at least two occurrences (for 8,542 out of the original 33,362 senses). Embeddings came from BERT-base (12 layers, 768-dimensional embeddings).
We evaluate our trained probes on the same dataset and WSD task used in 4.1.2 (Table 1 ). As a control, we compare each trained probe against a random probe of the same shape. As mentioned in 4.1.2, untransformed BERT embeddings achieve a state-of-the-art accuracy rate of 71.1%. We find that our trained probes are able to achieve slightly improved accuracy down to $m=128$ .
Though our probe achieves only a modest improvement in accuracy for final-layer embeddings, we note that we were able to more dramatically improve the performance of embeddings at earlier layers (see Appendix for details: Figure 10 ). This suggests there is more semantic information in the geometry of earlier-layer embeddings than a first glance might reveal.
Our results also support the idea that word sense information may be contained in a lower-dimensional space. This suggests a resolution to the seeming contradiction mentioned above: a vector encodes both syntax and semantics, but in separate complementary subspaces.
Embedding distance and context: a concatenation experiment
If word sense is affected by context, and encoded by location in space, then we should be able to influence context embedding positions by systematically varying their context. To test this hypothesis, we performed an experiment based on a simple and controllable context change: concatenating sentences where the same word is used in different senses.
We picked 25,096 sentence pairs from SemCor, using the same keyword in different senses. E.g.:
A: "He thereupon went to London and spent the winter talking to men of wealth." went: to move from one place to another.
B: "He went prone on his stomach, the better to pursue his examination." went: to enter into a specified state.
We define a matching and an opposing sense centroid for each keyword. For sentence A, the matching sense centroid is the average embedding for all occurrences of “went” used with sense A. A's opposing sense centroid is the average embedding for all occurrences of “went” used with sense B.
We gave each individual sentence in the pair to BERT-base and recorded the cosine similarity between the keyword embeddings and their matching sense centroids. We also recorded the similarity between the keyword embeddings and their opposing sense centroids. We call the ratio between the two similarities the individual similarity ratio. Generally this ratio is greater than one, meaning that the context embedding for the keyword is closer to the matching centroid than the opposing one.
We joined each sentence pair with the word "and" to create a single new sentence.
We gave these concatenations to BERT and recorded the similarities between the keyword embeddings and their matching/opposing sense centroids. Their ratio is the concatenated similarity ratio.
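The ratio bookkeeping can be sketched as follows; the embed helper (returning the keyword's BERT embedding in a given sentence, assumed here to pick the occurrence coming from sentence A in the concatenated case) and the sense centroids are assumptions carried over from the description above:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_ratio(keyword_embedding, matching_centroid, opposing_centroid):
    """> 1 means the keyword sits closer to its matching sense centroid."""
    return (cosine(keyword_embedding, matching_centroid) /
            cosine(keyword_embedding, opposing_centroid))

def concatenation_ratios(sent_a, sent_b, keyword, centroid_a, centroid_b, embed):
    individual = similarity_ratio(embed(sent_a, keyword), centroid_a, centroid_b)
    joined = sent_a + " and " + sent_b
    concatenated = similarity_ratio(embed(joined, keyword), centroid_a, centroid_b)
    return individual, concatenated
```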
Our hypothesis was that the keyword embeddings in the concatenated sentence would move towards their opposing sense centroids. Indeed, we found that the average individual similarity ratio was higher than the average concatenated similarity ratio at every layer (see Figure 5 ). Concatenating a random sentence did not change the individual similarity ratios. If the ratio is less than one for any sentence, that means BERT has misclassified its keyword sense. We found that the misclassification rate was significantly higher for final-layer embeddings in the concatenated sentences compared to the individual sentences: 8.23% versus 2.43% respectively.
We also measured the effect of projecting the final-layer keyword embeddings into the semantic subspace discussed in 4.1.3. After multiplying each embedding by our trained semantic probe, we obtained an average concatenated similarity ratio of 1.578 and individual similarity ratio of 1.875, which suggests that the transformed embeddings are closer to their matching sense centroids than the original embeddings (the original concatenated similarity ratio is 1.284 and the individual similarity ratio is 1.430). We also measured lower average misclassification rates for the transformed embeddings: 7.31% for concatenated sentences and 2.27% for individual sentences.
Conclusion and future work
We have presented a series of experiments that shed light on BERT's internal representations of linguistic information. We have found evidence of syntactic representation in attention matrices, with certain directions in space representing particular dependency relations. We have also provided a mathematical justification for the squared-distance tree embedding found by Hewitt and Manning.
Meanwhile, we have shown that just as there are specific syntactic subspaces, there is evidence for subspaces that represent semantic information. We also have shown how mistakes in word sense disambiguation may correspond to changes in internal geometric representation of word meaning. Our experiments also suggest an answer to the question of how all these different representations fit together. We conjecture that the internal geometry of BERT may be broken into multiple linear subspaces, with separate spaces for different syntactic and semantic information.
Investigating this kind of decomposition is a natural direction for future research. What other meaningful subspaces exist? After all, there are many types of linguistic information that we have not looked for.
A second important avenue of exploration is what the internal geometry can tell us about the specifics of the transformer architecture. Can an understanding of the geometry of internal representations help us find areas for improvement, or refine BERT's architecture?
Acknowledgments: We would like to thank David Belanger, Tolga Bolukbasi, Jasper Snoek, and Ian Tenney for helpful feedback and discussions.
Embedding trees in Euclidean space
Here we provide additional detail on the existence of various forms of tree embeddings.
Isometric embeddings of a tree (with its intrinsic tree metric) into Euclidean space are rare. Indeed, such an embedding is impossible even for a four-point tree $T$ , consisting of a root node $R$ with three children $C_1, C_2, C_3$ . If $f:T \rightarrow \mathbb {R}^n$ is a tree isometry then $||f(R) - f(C_1)|| = ||f(R) - f(C_2)|| = 1$ , and $||f(C_1) - f(C_2)|| = 2$ . It follows that $f(R)$ , $f(C_1)$ , $f(C_2)$ are collinear. The same can be said of $f(R)$ , $f(C_1)$ , and $f(C_3)$ , meaning that $f(C_2) = f(C_3)$ , contradicting $d(C_2, C_3) = 2$ .
Since this four-point tree cannot be embedded, it follows that the only trees that can be embedded isometrically are simple chains.
Not only are isometric embeddings generally impossible, but power- $p$ embeddings may also be unavailable when $p < 2$ , as the following argument shows.
Proof of Theorem "Theorem 2" We covered the case of $p = 1$ above. When $p < 1$ , even a tree of three points is impossible to embed without violating the triangle inequality. To handle the case when $1 < p < 2$ , consider a “star-shaped” tree of one root node with $k$ children; without loss of generality, assume the root node is embedded at the origin. Then in any power- $p$ embedding the other vertices will be sent to unit vectors, and for each pair of these unit vectors we have $||v_i - v_j||^p = 2$ .
On the other hand, a well-known folk theorem (e.g., see BIBREF27 ) says that given $k$ unit vectors $v_1, ..., v_k$ at least one pair of distinct vectors has $v_i \cdot v_j \ge -1/(k - 1)$ . By the law of cosines, it follows that $||v_i - v_j|| \le \sqrt{2 + \frac{2}{k-1}}$ . For any $p < 2$ , there is a sufficiently large $k$ such that $||v_i - v_j||^p \le (\sqrt{2 + \frac{2}{k-1}})^p = (2 + \frac{2}{k-1})^{p/2} < 2$ . Thus for any $p < 2$ a large enough star-shaped tree cannot have a power- $p$ embedding.
Ideal vs. actual parse tree embeddings
Figure 2 shows (left) a visualization of a BERT parse tree embedding (as defined by the context embeddings for individual words in a sentence). We compare with PCA projections of the canonical power-2 embedding of the same tree structure, as well as a random branch embedding. Finally, we display a completely randomly embedded tree as a control. The visualizations show a clear visual similarity between the BERT embedding and the two mathematical idealizations.
Additional BERT parse tree visualizations
Figure 7 shows four additional examples of PCA projections of BERT parse tree embeddings.
Additional word sense visualizations
We provide two additional examples of word sense visualizations, hand-annotated to show key clusters. See Figure 8 and Figure 9 . | attention probes, using visualizations of the activations created by different pieces of text |
c58e60b99a6590e6b9a34de96c7606b004a4f169 | c58e60b99a6590e6b9a34de96c7606b004a4f169_0 | Q: What linguistic features were probed for?
Text: Introduction
Neural networks for language processing have advanced rapidly in recent years. A key breakthrough was the introduction of transformer architectures BIBREF0 . One recent system based on this idea, BERT BIBREF1 , has proven to be extremely flexible: a single pretrained model can be fine-tuned to achieve state-of-the-art performance on a wide variety of NLP applications. This suggests the model is extracting a set of generally useful features from raw text. It is natural to ask, which features are extracted? And how is this information represented internally?
Similar questions have arisen with other types of neural nets. Investigations of convolutional neural networks BIBREF2 , BIBREF3 have shown how representations change from layer to layer BIBREF4 ; how individual units in a network may have meaning BIBREF5 ; and that “meaningful” directions exist in the space of internal activation BIBREF6 . These explorations have led to a broader understanding of network behavior.
Analyses on language-processing models (e.g., BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ) point to the existence of similarly rich internal representations of linguistic structure. Syntactic features seem to be extracted by RNNs (e.g., BIBREF7 , BIBREF9 ) as well as in BERT BIBREF11 , BIBREF12 , BIBREF13 , BIBREF10 . Inspirational work from Hewitt and Manning BIBREF8 found evidence of a geometric representation of entire parse trees in BERT's activation space.
Our work extends these explorations of the geometry of internal representations. Investigating how BERT represents syntax, we describe evidence that attention matrices contain grammatical representations. We also provide mathematical arguments that may explain the particular form of the parse tree embeddings described in BIBREF8 . Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level. Moreover, much of this semantic information appears to be encoded in a relatively low-dimensional subspace.
Context and related work
Our object of study is the BERT model introduced in BIBREF1 . To set context and terminology, we briefly describe the model's architecture. The input to BERT is based on a sequence of tokens (words or pieces of words). The output is a sequence of vectors, one for each input token. We will often refer to these vectors as context embeddings because they include information about a token's context.
BERT's internals consist of two parts. First, an initial embedding for each token is created by combining a pre-trained wordpiece embedding with position and segment information. Next, this initial sequence of embeddings is run through multiple transformer layers, producing a new sequence of context embeddings at each step. (BERT comes in two versions, a 12-layer BERT-base model and a 24-layer BERT-large model.) Implicit in each transformer layer is a set of attention matrices, one for each attention head, each of which contains a scalar value for each ordered pair $(token_i, token_j)$ .
Language representation by neural networks
Sentences are sequences of discrete symbols, yet neural networks operate on continuous data–vectors in high-dimensional space. Clearly a successful network translates discrete input into some kind of geometric representation–but in what form? And which linguistic features are represented?
The influential Word2Vec system BIBREF14 , for example, has been shown to place related words near each other in space, with certain directions in space corresponding to semantic distinctions. Grammatical information such as number and tense is also represented via directions in space. Analyses of the internal states of RNN-based models have shown that they represent information about soft hierarchical syntax in a form that can be extracted by a one-hidden-layer network BIBREF9 . One investigation of full-sentence embeddings found a wide variety of syntactic properties could be extracted not just by an MLP, but by logistic regression BIBREF15 .
Several investigations have focused on transformer architectures. Experiments suggest context embeddings in BERT and related models contain enough information to perform many tasks in the traditional “NLP pipeline” BIBREF12 –tagging part-of-speech, co-reference resolution, dependency labeling, etc.–with simple classifiers (linear or small MLP models) BIBREF11 , BIBREF10 . Qualitative, visualization-based work BIBREF16 suggests attention matrices may encode important relations between words.
A recent and fascinating discovery by Hewitt and Manning BIBREF8 , which motivates much of our work, is that BERT seems to create a direct representation of an entire dependency parse tree. The authors find that (after a single global linear transformation, which they term a “structural probe”) the square of the distance between context embeddings is roughly proportional to tree distance in the dependency parse. They ask why squaring distance is necessary; we address this question in the next section.
The work cited above suggests that language-processing networks create a rich set of intermediate representations of both semantic and syntactic information. These results lead to two motivating questions for our research. Can we find other examples of intermediate representations? And, from a geometric perspective, how do all these different types of information coexist in a single vector?
Geometry of syntax
We begin by exploring BERT's internal representation of syntactic information. This line of inquiry builds on the work by Hewitt and Manning in two ways. First, we look beyond context embeddings to investigate whether attention matrices encode syntactic features. Second, we provide a simple mathematical analysis of the tree embeddings that they found.
Attention probes and dependency representations
As in BIBREF8 , we are interested in finding representations of dependency grammar relations BIBREF17 . While BIBREF8 analyzed context embeddings, another natural place to look for encodings is in the attention matrices. After all, attention matrices are explicitly built on the relations between pairs of words.
To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing BIBREF11 . An attention probe is a task for a pair of tokens, $(token_i, token_j)$ where the input is a model-wide attention vector formed by concatenating the entries $a_{ij}$ in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words.
The data for our first experiment is a corpus of parsed sentences from the Penn Treebank BIBREF18 . This dataset has the constituency grammar for the sentences, which was translated to a dependency grammar using the PyStanfordDependencies library BIBREF19 . The entirety of the Penn Treebank consists of 3.1 million dependency relations; we filtered this by using only examples of the 30 dependency relations with more than 5,000 examples in the data set. We then ran each sentence through BERT-base, and obtained the model-wide attention vector (see Figure 1 ) between every pair of tokens in the sentence, excluding the $[SEP]$ and $[CLS]$ tokens. This and subsequent experiments were conducted using PyTorch on MacBook machines.
With these labeled embeddings, we trained two L2 regularized linear classifiers via stochastic gradient descent, using BIBREF20 . The first of these probes was a simple linear binary classifier to predict whether or not an attention vector corresponds to the existence of a dependency relation between two tokens. This was trained with a balanced class split, and 30% train/test split. The second probe was a multiclass classifier to predict which type of dependency relation exists between two tokens, given the dependency relation’s existence. This probe was trained with distributions outlined in table 2 .
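The probing setup can be sketched in a few lines. The following is an illustrative reconstruction rather than the original implementation: it assumes the HuggingFace transformers interface to BERT-base and scikit-learn's SGDClassifier, uses placeholder labels, and glosses over wordpiece-to-word alignment and the extraction of gold dependency labels from the treebank.

```python
import numpy as np
import torch
from transformers import BertModel, BertTokenizer
from sklearn.linear_model import SGDClassifier

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def model_wide_attention_vectors(sentence):
    """Map each ordered token pair (i, j) to the concatenation of a_ij over
    all layers and heads (12 x 12 = 144 entries for BERT-base)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    att = torch.stack(out.attentions).squeeze(1)   # (layers, heads, seq, seq)
    seq_len = att.shape[-1]
    vectors = {}
    for i in range(1, seq_len - 1):                # skip [CLS] and [SEP]
        for j in range(1, seq_len - 1):
            if i != j:
                vectors[(i, j)] = att[:, :, i, j].reshape(-1).numpy()
    return vectors

# Placeholder training data: in the experiment, y marks whether the token pair
# stands in a dependency relation (binary probe) or which relation it is
# (multiclass probe); here the labels are random stand-ins.
X = np.stack(list(model_wide_attention_vectors("The quick brown fox jumps .").values()))
y = np.random.randint(0, 2, size=len(X))

probe = SGDClassifier(penalty="l2", max_iter=1000)  # L2-regularized linear probe
probe.fit(X, y)
```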
The binary probe achieved an accuracy of 85.8%, and the multiclass probe achieved an accuracy of 71.9%. Our real aim, again, is not to create a state-of-the-art parser, but to gauge whether model-wide attention vectors contain a relatively simple representation of syntactic features. The success of this simple linear probe suggests that syntactic information is in fact encoded in the attention vectors.
Geometry of parse tree embeddings
Hewitt and Manning's result that context embeddings represent dependency parse trees geometrically raises several questions. Is there a reason for the particular mathematical representation they found? Can we learn anything by visualizing these representations?
Hewitt and Manning ask why parse tree distance seems to correspond specifically to the square of Euclidean distance, and whether some other metric might do better BIBREF8 . We describe mathematical reasons why squared Euclidean distance may be natural.
First, one cannot generally embed a tree, with its tree metric $d$ , isometrically into Euclidean space (Appendix "Embedding trees in Euclidean space" ). Since an isometric embedding is impossible, motivated by the results of BIBREF8 we might ask about other possible representations.
Definition 1 (power- $p$ embedding) Let $M$ be a metric space, with metric $d$ . We say $f: M \rightarrow \mathbb {R}^n$ is a power- $p$ embedding if for all $x, y \in M$ , we have $||f(x) - f(y)||^p = d(x, y)$
In these terms, we can say BIBREF8 found evidence of a power-2 embedding for parse trees. It turns out that a power-2 embedding is an especially elegant mapping. For one thing, it is easy to write down an explicit model (a mathematical idealization) of a power-2 embedding for any tree.
Theorem 1 Any tree with $n$ nodes has a power-2 embedding into $\mathbb {R}^{n-1}$ .
Let the nodes of the tree be $t_0, ..., t_{n-1}$ , with $t_0$ being the root node. Let $\lbrace e_1, ..., e_{n-1}\rbrace $ be orthogonal unit basis vectors for $\mathbb {R}^{n-1}$ . Inductively, define an embedding $f$ such that $f(t_0) = 0$ and $f(t_i) = e_i + f(parent(t_i))$ for $i > 0$ .
Given two distinct tree nodes $x$ and $y$ , where $m$ is the tree distance $d(x, y)$ , it follows that we can move from $f(x)$ to $f(y)$ using $m$ mutually perpendicular unit steps. Thus $||f(x) - f(y)||^2 = m = d(x, y)$ .
Remark 1
This embedding has a simple informal description: at each embedded vertex of the graph, all line segments to neighboring embedded vertices are unit-distance segments, orthogonal to each other and to every other edge segment. (It's even easy to write down a set of coordinates for each node.) By definition any two power-2 embeddings of the same tree are isometric; with that in mind, we refer to this as the canonical power-2 embedding.
In the proof of Theorem 1, instead of choosing basis vectors in advance, one can choose random unit vectors. Because two random vectors will be nearly orthogonal in high-dimensional space, the power-2 embedding condition will approximately hold. This means that in space that is sufficiently high-dimensional (compared to the size of the tree) it is possible to construct an approximate power-2 embedding with essentially “local” information, where a tree node is connected to its children via random unit-length branches. We refer to this type of embedding as a random branch embedding. (See Appendix "Ideal vs. actual parse tree embeddings" for a visualization of these various embeddings.)
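Both the canonical power-2 embedding and the random branch embedding are easy to write down explicitly. The following is a toy sketch of the two constructions, assuming the tree is given as a parent array with node 0 as the root and each parent numbered before its children; it verifies the power-2 property on a small example.

```python
import numpy as np

def canonical_power2_embedding(parent):
    """parent[i] is the parent of node i; parent[0] == -1 for the root.
    Each non-root node i gets its own orthogonal unit basis vector."""
    n = len(parent)
    emb = np.zeros((n, n - 1))
    for i in range(1, n):
        emb[i] = emb[parent[i]].copy()
        emb[i][i - 1] += 1.0          # step along a fresh orthogonal direction
    return emb

def random_branch_embedding(parent, dim=1024, seed=0):
    """Same construction, but each branch is a random unit vector in a
    high-dimensional space, so orthogonality only holds approximately."""
    rng = np.random.default_rng(seed)
    n = len(parent)
    emb = np.zeros((n, dim))
    for i in range(1, n):
        v = rng.normal(size=dim)
        emb[i] = emb[parent[i]] + v / np.linalg.norm(v)
    return emb

def tree_distance(parent, i, j):
    """Path length between nodes i and j (naive ancestor-walk)."""
    def ancestors(k):
        path = [k]
        while parent[k] != -1:
            k = parent[k]
            path.append(k)
        return path
    ai, aj = ancestors(i), ancestors(j)
    common = set(ai) & set(aj)
    lca = next(k for k in ai if k in common)
    return ai.index(lca) + aj.index(lca)

# Small example: root 0 with children 1 and 2; node 3 is a child of 1.
parent = [-1, 0, 0, 1]
emb = canonical_power2_embedding(parent)
for i in range(4):
    for j in range(i + 1, 4):
        d = tree_distance(parent, i, j)
        assert abs(np.sum((emb[i] - emb[j]) ** 2) - d) < 1e-9
```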
In addition to these appealing aspects of power-2 embeddings, it is worth noting that power- $p$ embeddings will not necessarily even exist when $p < 2$ . (See Appendix "Embedding trees in Euclidean space" for the proof.)
Theorem 2 For any $p < 2$ , there is a tree which has no power- $p$ embedding.
Remark 2
On the other hand, the existence result for power-2 embeddings, coupled with results of BIBREF22 , implies that power- $p$ tree embeddings do exist for any $p > 2$ .
The simplicity of power-2 tree embeddings, as well as the fact that they may be approximated by a simple random model, suggests they may be a generally useful alternative to approaches to tree embeddings that require hyperbolic geometry BIBREF23 .
How do parse tree embeddings in BERT compare to exact power-2 embeddings? To explore this question, we created a simple visualization tool. The input to each visualization is a sentence from the Penn Treebank with associated dependency parse trees (see Section "Geometry of word senses" ). We then extracted the token embeddings produced by BERT-large in layer 16 (following BIBREF8 ), transformed by the Hewitt and Manning’s “structural probe” matrix $B$ , yielding a set of points in 1024-dimensional space. We used PCA to project to two dimensions. (Other dimensionality-reduction methods, such as t-SNE and UMAP BIBREF24 , were harder to interpret.)
To visualize the tree structure, we connected pairs of points representing words with a dependency relation. The color of each edge indicates the deviation from true tree distance. We also connected, with dotted line, pairs of words without a dependency relation but whose positions (before PCA) were far closer than expected. The resulting image lets us see both the overall shape of the tree embedding, and fine-grained information on deviation from a true power-2 embedding.
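A minimal sketch of this visualization pipeline is given below, assuming per-word context embeddings emb (BERT-large layer 16) and a trained structural-probe matrix B are already available; the function name, colormap, and deviation scaling are hypothetical choices, not details from the original implementation.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_tree_embedding(emb, B, edges, tree_dist):
    """emb: (n_words, 1024) context embeddings; B: structural-probe matrix of
    shape (rank, 1024); edges: list of (i, j) dependency pairs; tree_dist:
    dict mapping (i, j) to parse-tree distance."""
    probed = emb @ B.T                              # project into the syntactic subspace
    xy = PCA(n_components=2).fit_transform(probed)
    for i, j in edges:
        deviation = np.sum((probed[i] - probed[j]) ** 2) - tree_dist[(i, j)]
        plt.plot(xy[[i, j], 0], xy[[i, j], 1],
                 color=plt.cm.coolwarm(0.5 + deviation / 2.0))
    plt.scatter(xy[:, 0], xy[:, 1], s=10, color="black")
    plt.axis("off")
    plt.show()
```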
Two example visualizations are shown in Figure 2 , next to traditional diagrams of their underlying parse trees. These are typical cases, illustrating some common patterns; for instance, prepositions are embedded unexpectedly close to words they relate to. (Figure 7 shows additional examples.)
A natural question is whether the difference between these projected trees and the canonical ones is merely noise, or a more interesting pattern. By looking at the average embedding distances of each dependency relation (see Figure 3 ) , we can see that they vary widely from around 1.2 ( $compound:prt$ , $advcl$ ) to 2.5 ( $mwe$ , $parataxis$ , $auxpass$ ). Such systematic differences suggest that BERT's syntactic representation has an additional quantitative aspect beyond traditional dependency grammar.
Geometry of word senses
BERT seems to have several ways of representing syntactic information. What about semantic features? Since embeddings produced by transformer models depend on context, it is natural to speculate that they capture the particular shade of meaning of a word as used in a particular sentence. (E.g., is “bark” an animal noise or part of a tree?) We explored geometric representations of word sense both qualitatively and quantitatively.
Visualization of word senses
Our first experiment is an exploratory visualization of how word sense affects context embeddings. For data on different word senses, we collected all sentences used in the introductions to English-language Wikipedia articles. (Text outside of introductions was frequently fragmentary.) We created an interactive application, which we plan to make public. A user enters a word, and the system retrieves 1,000 sentences containing that word. It sends these sentences to BERT-base as input, and for each one it retrieves the context embedding for the word from a layer of the user's choosing.
The system visualizes these 1,000 context embeddings using UMAP BIBREF24 , generally showing clear clusters relating to word senses. Different senses of a word are typically spatially separated, and within the clusters there is often further structure related to fine shades of meaning. In Figure 4 , for example, we not only see crisp, well-separated clusters for three meanings of the word “die,” but within one of these clusters there is a kind of quantitative scale, related to the number of people dying.
See Appendix "Additional word sense visualizations" for further examples. The apparent detail in the clusters we visualized raises two immediate questions. First, is it possible to find quantitative corroboration that word senses are well-represented? Second, how can we resolve a seeming contradiction: in the previous section, we saw how position represented syntax; yet here we see position representing semantics.
Measurement of word sense disambiguation capability
The crisp clusters seen in visualizations such as Figure 4 suggest that BERT may create simple, effective internal representations of word senses, putting different meanings in different locations. To test this hypothesis quantitatively, we test whether a simple classifier on these internal representations can perform well at word-sense disambiguation (WSD).
We follow the procedure described in BIBREF10 , which performed a similar experiment with the ELMo model. For a given word with $n$ senses, we make a nearest-neighbor classifier where each neighbor is the centroid of a given word sense's BERT-base embeddings in the training data. To classify a new word we find the closest of these centroids, defaulting to the most commonly used sense if the word was not present in the training data. We used the data and evaluation from BIBREF25 : the training data was SemCor BIBREF26 (33,362 senses), and the testing data was the suite described in BIBREF25 (3,669 senses).
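The classifier itself is only a few lines. The sketch below assumes that (word, sense, embedding) triples have already been extracted from SemCor with BERT-base and that a most_frequent_sense lookup is available for the back-off case; names and data formats are hypothetical.

```python
import numpy as np
from collections import defaultdict

def build_centroids(train_items):
    """train_items: iterable of (word, sense, embedding) triples."""
    sums, counts = {}, defaultdict(int)
    for word, sense, emb in train_items:
        key = (word, sense)
        sums[key] = sums[key] + emb if key in sums else emb.copy()
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def classify(word, emb, centroids, most_frequent_sense):
    """Nearest-centroid word-sense disambiguation with frequency back-off."""
    candidates = [(sense, c) for (w, sense), c in centroids.items() if w == word]
    if not candidates:
        return most_frequent_sense(word)
    best_sense, _ = min(candidates, key=lambda sc: np.linalg.norm(emb - sc[1]))
    return best_sense
```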
The simple nearest-neighbor classifier achieves an F1 score of 71.1, higher than the current state of the art (Table 1 ), with the accuracy monotonically increasing through the layers. This is a strong signal that context embeddings are representing word-sense information. Additionally, an even higher score of 71.5 was obtained using the technique described in the following section.
We hypothesized that there might also exist a linear transformation under which distances between embeddings would better reflect their semantic relationships–that is, words of the same sense would be closer together and words of different senses would be further apart.
To explore this hypothesis, we trained a probe following Hewitt and Manning's methodology. We initialized a random matrix $B\in \mathbb {R}^{k\times m}$ , testing different values for $m$ . Loss is, roughly, defined as the difference between the average cosine similarity between embeddings of words with different senses, and that between embeddings of the same sense. However, we clamped the cosine similarity terms to within $\pm 0.1$ of the pre-training averages for same and different senses. (Without clamping, the trained matrix simply ended up taking well-separated clusters and separating them further. We tested values between $0.05$ and $0.2$ for the clamping range and $0.1$ had the best performance.)
Our training corpus was the same dataset from 4.1.2., filtered to include only words with at least two senses, each with at least two occurrences (for 8,542 out of the original 33,362 senses). Embeddings came from BERT-base (12 layers, 768-dimensional embeddings).
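A rough PyTorch rendering of this training objective is given below. The pairing scheme, the pre-training average similarities, and the optimizer settings are placeholders; only the overall shape of the loss (clamped same-sense versus different-sense cosine similarity, computed after multiplication by a trainable matrix $B$) follows the description above.

```python
import torch

def probe_loss(emb_a, emb_b, same_sense, B, avg_same=0.75, avg_diff=0.55, margin=0.1):
    """emb_a, emb_b: (batch, 768) BERT-base embeddings for pairs of word
    occurrences; same_sense: (batch,) boolean tensor; B: trainable (768, m)
    probe. avg_same / avg_diff stand in for the measured pre-training
    average cosine similarities."""
    pa, pb = emb_a @ B, emb_b @ B
    cos = torch.nn.functional.cosine_similarity(pa, pb, dim=-1)
    same = torch.clamp(cos[same_sense], avg_same - margin, avg_same + margin)
    diff = torch.clamp(cos[~same_sense], avg_diff - margin, avg_diff + margin)
    # minimize: different-sense pairs should become less similar, same-sense more
    return diff.mean() - same.mean()

m = 128
B = torch.randn(768, m, requires_grad=True)
optimizer = torch.optim.Adam([B], lr=1e-3)
# per batch: optimizer.zero_grad(); probe_loss(a, b, s, B).backward(); optimizer.step()
```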
We evaluate our trained probes on the same dataset and WSD task used in 4.1.2 (Table 1 ). As a control, we compare each trained probe against a random probe of the same shape. As mentioned in 4.1.2, untransformed BERT embeddings achieve a state-of-the-art accuracy rate of 71.1%. We find that our trained probes are able to achieve slightly improved accuracy down to $m=128$ .
Though our probe achieves only a modest improvement in accuracy for final-layer embeddings, we note that we were able to more dramatically improve the performance of embeddings at earlier layers (see Figure 10 in the Appendix for details). This suggests there is more semantic information in the geometry of earlier-layer embeddings than a first glance might reveal.
Our results also support the idea that word sense information may be contained in a lower-dimensional space. This suggests a resolution to the seeming contradiction mentioned above: a vector encodes both syntax and semantics, but in separate complementary subspaces.
Embedding distance and context: a concatenation experiment
If word sense is affected by context, and encoded by location in space, then we should be able to influence context embedding positions by systematically varying their context. To test this hypothesis, we performed an experiment based on a simple and controllable context change: concatenating sentences where the same word is used in different senses.
We picked 25,096 sentence pairs from SemCor, using the same keyword in different senses. E.g.:
A: "He thereupon went to London and spent the winter talking to men of wealth." went: to move from one place to another.
B: "He went prone on his stomach, the better to pursue his examination." went: to enter into a specified state.
We define a matching and an opposing sense centroid for each keyword. For sentence A, the matching sense centroid is the average embedding for all occurrences of “went” used with sense A. A's opposing sense centroid is the average embedding for all occurrences of “went” used with sense B.
We gave each individual sentence in the pair to BERT-base and recorded the cosine similarity between the keyword embeddings and their matching sense centroids. We also recorded the similarity between the keyword embeddings and their opposing sense centroids. We call the ratio between the two similarities the individual similarity ratio. Generally this ratio is greater than one, meaning that the context embedding for the keyword is closer to the matching centroid than the opposing one.
We joined each sentence pair with the word "and" to create a single new sentence.
We gave these concatenations to BERT and recorded the similarities between the keyword embeddings and their matching/opposing sense centroids. Their ratio is the concatenated similarity ratio.
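The two ratios can be computed as sketched below, assuming a helper that returns the BERT embedding of the keyword in a given sentence and precomputed matching/opposing sense centroids; the helper and variable names are hypothetical.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_ratio(keyword_emb, matching_centroid, opposing_centroid):
    """> 1 means the keyword embedding is closer to its matching sense centroid."""
    return cosine(keyword_emb, matching_centroid) / cosine(keyword_emb, opposing_centroid)

# For a sentence pair (A, B) sharing a keyword used in two senses:
#   individual   = similarity_ratio(keyword_embedding(A), centroid_A, centroid_B)
#   concatenated = similarity_ratio(keyword_embedding(A + " and " + B), centroid_A, centroid_B)
# A ratio below 1 counts as a misclassification of the keyword's sense.
```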
Our hypothesis was that the keyword embeddings in the concatenated sentence would move towards their opposing sense centroids. Indeed, we found that the average individual similarity ratio was higher than the average concatenated similarity ratio at every layer (see Figure 5 ). Concatenating a random sentence did not change the individual similarity ratios. If the ratio is less than one for any sentence, that means BERT has misclassified its keyword sense. We found that the misclassification rate was significantly higher for final-layer embeddings in the concatenated sentences compared to the individual sentences: 8.23% versus 2.43% respectively.
We also measured the effect of projecting the final-layer keyword embeddings into the semantic subspace discussed in 4.1.3. After multiplying each embedding by our trained semantic probe, we obtained an average concatenated similarity ratio of 1.578 and individual similarity ratio of 1.875, which suggests that the transformed embeddings are closer to their matching sense centroids than the original embeddings (the original concatenated similarity ratio is 1.284 and the individual similarity ratio is 1.430). We also measured lower average misclassification rates for the transformed embeddings: 7.31% for concatenated sentences and 2.27% for individual sentences.
Conclusion and future work
We have presented a series of experiments that shed light on BERT's internal representations of linguistic information. We have found evidence of syntactic representation in attention matrices, with certain directions in space representing particular dependency relations. We have also provided a mathematical justification for the squared-distance tree embedding found by Hewitt and Manning.
Meanwhile, we have shown that just as there are specific syntactic subspaces, there is evidence for subspaces that represent semantic information. We also have shown how mistakes in word sense disambiguation may correspond to changes in internal geometric representation of word meaning. Our experiments also suggest an answer to the question of how all these different representations fit together. We conjecture that the internal geometry of BERT may be broken into multiple linear subspaces, with separate spaces for different syntactic and semantic information.
Investigating this kind of decomposition is a natural direction for future research. What other meaningful subspaces exist? After all, there are many types of linguistic information that we have not looked for.
A second important avenue of exploration is what the internal geometry can tell us about the specifics of the transformer architecture. Can an understanding of the geometry of internal representations help us find areas for improvement, or refine BERT's architecture?
Acknowledgments: We would like to thank David Belanger, Tolga Bolukbasi, Jasper Snoek, and Ian Tenney for helpful feedback and discussions.
Embedding trees in Euclidean space
Here we provide additional detail on the existence of various forms of tree embeddings.
Isometric embeddings of a tree (with its intrinsic tree metric) into Euclidean space are rare. Indeed, such an embedding is impossible even for a four-point tree $T$ , consisting of a root node $R$ with three children $C_1, C_2, C_3$ . If $f:T \rightarrow \mathbb {R}^n$ is a tree isometry then $||f(R) - f(C_1)|| = ||f(R) - f(C_2)|| = 1$ , and $||f(C_1) - f(C_2)|| = 2$ . It follows that $f(R)$ , $f(C_1)$ , $f(C_2)$ are collinear, with $f(R)$ the midpoint of the segment from $f(C_1)$ to $f(C_2)$ . The same can be said of $f(R)$ , $f(C_1)$ , and $f(C_3)$ , meaning that $f(C_2) = f(C_3)$ , which contradicts $d(C_2, C_3) = 2$ .
Since this four-point tree cannot be embedded, it follows that the only trees that can be embedded isometrically are simple chains.
Not only are isometric embeddings generally impossible, but power- $p$ embeddings may also be unavailable when $p < 2$ , as the following argument shows.
Proof of Theorem "Theorem 2" We covered the case of $p = 1$ above. When $p < 1$ , even a tree of three points is impossible to embed without violating the triangle inequality. To handle the case when $1 < p < 2$ , consider a “star-shaped” tree of one root node with $k$ children; without loss of generality, assume the root node is embedded at the origin. Then in any power- $p$ embedding the other vertices will be sent to unit vectors, and for each pair of these unit vectors we have $||v_i - v_j||^p = 2$ .
On the other hand, a well-known folk theorem (e.g., see BIBREF27 ) says that given $k$ unit vectors $v_1, ..., v_k$ at least one pair of distinct vectors has $v_i \cdot v_j \ge -1/(k - 1)$ . By the law of cosines, it follows that $||v_i - v_j|| \le \sqrt{2 + \frac{2}{k-1}}$ . For any $p < 2$ , there is a sufficiently large $k$ such that $||v_i - v_j||^p \le (\sqrt{2 + \frac{2}{k-1}})^p = (2 + \frac{2}{k-1})^{p/2} < 2$ . Thus for any $p < 2$ a large enough star-shaped tree cannot have a power- $p$ embedding.
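The bound in this argument is easy to probe numerically. The snippet below (an added illustration, not part of the original proof) finds, for a given $p < 2$ , the smallest number of children $k$ at which the folk-theorem bound forces some pair of children below the required distance.

```python
def smallest_failing_star(p):
    """Smallest k such that (2 + 2/(k-1))**(p/2) < 2, i.e. a root with k
    children cannot have a power-p embedding."""
    k = 2
    while (2 + 2 / (k - 1)) ** (p / 2) >= 2:
        k += 1
    return k

print(smallest_failing_star(1.5))   # 5: a star with five children already fails
print(smallest_failing_star(1.9))   # 28: closer to p = 2, a much larger star is needed
```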
Ideal vs. actual parse tree embeddings
Figure 2 shows (left) a visualization of a BERT parse tree embedding (as defined by the context embeddings for individual words in a sentence). We compare with PCA projections of the canonical power-2 embedding of the same tree structure, as well as a random branch embedding. Finally, we display a completely randomly embedded tree as a control. The visualizations show a clear visual similarity between the BERT embedding and the two mathematical idealizations.
Additional BERT parse tree visualizations
Figure 7 shows four additional examples of PCA projections of BERT parse tree embeddings.
Additional word sense visualizations
We provide two additional examples of word sense visualizations, hand-annotated to show key clusters. See Figure 8 and Figure 9 . | dependency relation between two words, word sense |
6a099dfe354a79936b59d651ba0887d9f586eaaf | 6a099dfe354a79936b59d651ba0887d9f586eaaf_0 | Q: Does the paper describe experiments with real humans?
Text: Overinformativeness in referring expressions
Reference to objects is one of the most basic and prevalent uses of language. In order to refer, speakers must choose from among a wealth of referring expressions they have at their disposal. How does a speaker choose whether to refer to an object as the animal, the dog, the dalmatian, or the big mostly white dalmatian? The context within which the object occurs (other non-dogs, other dogs, other dalmatians) plays a large part in determining which features the speaker chooses to include in their utterance – speakers aim to be sufficiently informative to establish unique reference to the intended object. However, speakers' utterances often exhibit what has been claimed to be overinformativeness: referring expressions are often more specific than necessary for establishing unique reference, and they are more specific in systematic ways. For instance, speakers are likely to produce referring expressions like the small blue pin instead of the small pin in contexts like Figure 1 , even though the color modifier provides no additional information BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Similar use of redundant size modifiers, in contrast, is rare. Providing a unified theory for speakers' systematic patterns of overinformativeness has so far proven elusive.
This paper is concerned with accounting for these systematic patterns in overinformative referring expressions. We restrict ourselves to definite descriptions of the form the (ADJ?)+ NOUN, that is, noun phrases that minimally contain the definite determiner the followed by a head noun, with any number of adjectives occurring between the determiner and the noun. A model of such referring expressions will allow us to unify two domains in language production that have been typically treated as separate. The choice of adjectives in (purportedly) overmodified referring expressions has been a primary focus of the language production literature BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF2 , BIBREF3 , BIBREF10 , while the choice of noun in simple nominal expressions has so far mostly received attention in the concepts and categorization literature BIBREF11 , BIBREF12 and in the developmental literature on generalizing basic level terms BIBREF13 . In the following, we review some of the key phenomena and puzzles in each of these literatures. We then present a model of referring expression production within the Rational Speech Act framework BIBREF14 , BIBREF15 , BIBREF16 , which treats speakers as boundedly rational agents who optimize the tradeoff between utterance cost and informativeness. Our key innovation is to relax the assumption that semantic truth functions are deterministic. Under this relaxed semantics, where certain terms may apply better than others without strictly being true or false, it can be useful and informative to add seemingly overinformative modifiers or use nouns that are seemingly too specific; not doing so might allow the listener to go astray, or to invest too much processing effort in inferring the speaker's intention. This model provides a unified explanation for a number of seemingly disparate phenomena from the modified and nominal referring expression literature.
We spend the remainder of the paper demonstrating how this account applies to various phenomena. In Section "Overinformativeness in referring expressions" we spell out the problem and introduce the key overinformativeness phenomena. In Section "Modeling speakers' choice of referring expression" we introduce the basic Rational Speech Act framework with deterministic semantics and show how it can be extended to a relaxed semantics. In Sections 3 - 5 we evaluate the relaxed semantics RSA model on data from interactive online reference game experiments that exhibit the phenomena introduced in Section "Overinformativeness in referring expressions" : size and color modifier choice under varying conditions of scene complexity; typicality effects in the choice of color modifier; and choice of nominal level of reference. We wrap up in Section "General Discussion" by summarizing our findings and discussing the far-reaching implications of and further challenges for this line of work.
Production of referring expressions: a case against rational language use?
How should a cooperative speaker choose between competing referring expressions? Grice, in his seminal work, provided some guidance by formulating his famous conversational maxims, intended as a guide to listeners' expectations about good speaker behavior BIBREF17 . His maxim of Quantity, consisting of two parts, requires of speakers to:
Quantity-1: Make your contribution as informative as is required (for the purposes of the exchange).
Quantity-2: Do not make your contribution more informative than is required.
That is, speakers should aim to produce neither under- nor overinformative utterances. While much support has been found for the avoidance of underinformativeness BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF9 , BIBREF22 , speakers seem remarkably willing to systematically violate Quantity-2. In modified referring expressions, they routinely produce modifiers that are not necessary for uniquely establishing reference (e.g., the small blue pin instead of the small pin in contexts like Figure 1 ). In simple nominal expressions, speakers routinely choose to refer to an object with a basic level term even when a superordinate level term would have been sufficient for establishing reference (e.g., the dog instead of the animal in contexts like Figure 18 ; Rosch1976, hoffmann1983objektidentifikation, TanakaTaylor91BasicLevelAndExpertise, Johnson1997, brown1958words).
These observations have posed a challenge for theories of language production, especially those positing rational language use (including the Gricean one): why this extra expenditure of useless effort? Why this seeming blindness to the level of informativeness requirement? Many have argued from these observations that speakers are in fact not economical BIBREF9 , BIBREF5 . Some have derived a built-in preference for referring at the basic level from considerations of perceptual factors such as shape BIBREF12 , BIBREF11 , BIBREF24 . Others have argued for salience-driven effects on willingness to overmodify BIBREF1 , BIBREF25 . In all cases, it is argued that informativeness cannot be the key factor in determining the content of speakers' referring expressions.
Here we revisit this claim and show that systematically relaxing the requirement of a deterministic semantics for referring expressions also systematically changes the informativeness of utterances. This results in a reconceptualization of what have been termed overinformative referring expressions as rationally redundant referring expressions. We begin by reviewing the phenomena of interest that a revised theory of definite referring expressions should be able to account for.
Modified referring expressions
Most of the literature on overinformative referring expressions has been devoted to the use of overinformative modifiers in modified referring expressions. The prevalent observation is that speakers frequently do not include only the minimal modifiers required for establishing reference, but often also include redundant modifiers BIBREF5 , BIBREF6 , BIBREF8 , BIBREF9 , BIBREF2 , BIBREF3 . However, not all modifiers are created equal: there are systematic differences in the overmodification patterns observed for size adjectives (e.g., big, small), color adjectives (e.g., blue, red), material adjectives (e.g., plastic, wooden), and others BIBREF7 . Here we review some key patterns of overmodification that have been observed, before spelling out our account of these phenomena in Section "Modeling speakers' choice of referring expression" .
In Figure 1 , singling out the object highlighted by the green border requires only mentioning its size (the small pin). But it is now well-documented that speakers routinely include redundant color adjectives (the small blue pin) which are not necessary for uniquely singling out the intended referent in these kinds of contexts BIBREF5 , BIBREF26 , BIBREF0 . However, the same is not true for size: in contexts like Figure 1 , where color is sufficient for unique reference (the blue pin), speakers overmodify much more rarely. Though there is quite a bit of variation in proportions of overmodification, this asymmetry in the propensity for overmodifying with color but not size has been documented repeatedly BIBREF5 , BIBREF7 , BIBREF0 , BIBREF10 , BIBREF25 , BIBREF27 .
Explanations for this asymmetry have varied. Pechmann1989 was the first to take the asymmetry as evidence for speakers following an incremental strategy of object naming: speakers initially start to articulate an adjective denoting a feature that listeners can quickly and easily recognize (i.e., color) before they have fully inspected the display and extracted the sufficient dimension. However, this would predict that speakers routinely should produce expressions like the blue small pin, which violate the preference for size adjectives to occur before color adjectives in English BIBREF28 , BIBREF29 . While Pechmann did observe such violations in his dataset, most cases of overmodification did not constitute such violations, and he himself concluded that incrementality cannot (on its own) account for the asymmetry in speakers' propensity for overmodifying with color vs. size.
Another explanation for the asymmetry is that speakers try to produce modifiers that denote features that are reasonably easy for the listener to perceive, so that, even when a feature is not fully distinguishing in context, it at least serves to restrict the number of objects that could plausibly be considered the target. Indeed, there has been some support for the idea that overmodification can be beneficial to listeners by facilitating target identification BIBREF2 , BIBREF10 , BIBREF30 . We return to this idea in Section "Modeling speakers' choice of referring expression" and the General Discussion.
There have been various attempts to capture the color-size asymmetry in computational natural language generation models. The earliest contenders for models of definite referring expressions like the Full Brevity algorithm BIBREF31 or the Greedy algorithm BIBREF31 focused only on discriminatory value – that is, an utterance's informativeness – in generating referring expressions. This is equivalent to the very simple interpretation of Grice laid out above, and consequently these models demonstrated the same inability to capture the color-size asymmetry: they only produced the minimally specified expressions. Subsequently, the Incremental algorithm BIBREF32 incorporated a preference order on features, with color ranked higher than size. The order is traversed and each encountered feature included in the expression if it serves to exclude at least one further distractor. This results in the production of overinformative color but not size adjectives. However, the resulting asymmetry is much greater than that evident in human speakers, and is deterministic rather than exhibiting the probabilistic production patterns that human speakers exhibit. More recently, the PRO model BIBREF33 has sought to integrate the observation that speakers seem to have a preference for including color terms with the observation that a preference does not imply the deterministic inclusion of said color term. The model is specifically designed to capture the color-size asymmetry: in a first step, the uniquely distinguishing property (if there is one) is first selected deterministically. In a second step, an additional property is added probabilistically, depending on both a salience parameter associated with the additional property and a parameter capturing speakers' eagerness to overmodify. If both properties are uniquely distinguishing, a property is selected probabilistically depending on its associated salience parameter. The second step proceeds as before.
However, while the PRO model – the most state-of-the-art computational model of human production of modified referring expressions – can capture the color-size asymmetry, it is neither flexible enough to be extended straightforwardly to other modifiers beyond color and size, nor can it straightforwardly be extended to capture the more subtle systematicity with which the preference to overmodify with color changes based on various features of context.
Speakers' propensity to overmodify with color is highly dependent on features of the distractor objects in the context. In particular, as the variation present in the scene increases, so does the probability of overmodifying BIBREF22 , BIBREF27 . How exactly scene variation is quantified differs across experiments. One very clear demonstration of the scene variation effect was given by Koolen2013, who quantified scene variation as the number of feature dimensions along which objects in a scene vary. Over the course of three experiments, they compared a low-variation condition in which objects never differed in color with a high-variation condition in which objects differed in type, color, orientation, and size. They consistently found higher rates of overmodification with color in the high-variation (28-27%) than in the low-variation (4-10%) conditions. Similarly, Davies2013 found that listeners judge overmodified referring expressions in low-variation scenes of four objects as less natural than in high-variation scenes of 4 potentially compositional `objects-on-objects' (e.g., a button on a sock). And finally, gatt2017, while not reporting differences in overmodification behavior, did find that when size and color are jointly disambiguating, speech onset times for non-redundant color-and-size utterances increased as the number of distractors in the display increased.
The effect of scene variation on propensity to overmodify has typically been explained as the result of the demands imposed on visual search: in low-variation scenes, it is easier to discern the discriminating dimensions than in high-variation scenes, where it may be easier to simply start naming features of the target that are salient BIBREF27 .
Above, we have considered three different ways of quantifying scene variation: the number of dimensions along which objects differ, whether objects are `simple' or `compositional', and the number of distractors present in a scene. A model of referring expression generation should ideally capture all of these types of variation in a unified way.
Modifier type and amount of scene variation are not the only factors determining overmodification. Overmodification with color has been shown to be systematically related to the typicality of the color for the object. Building on work by sedivy2003a, Westerbeek2015 (and more recently, rubiofernandez2016) have shown that the more typical a color is for an object, the less likely it is to be mentioned when not necessary for unique reference. For example, speakers never refer to a yellow banana in the absence of other bananas as the yellow banana (see Figure 2 ), but they sometimes refer to a brown banana as the brown banana, and they almost always refer to a blue banana as the blue banana (see Figure 2 ). Similar typicality effects have been shown for other (non-color) properties. For example, Mitchell2013 showed that speakers are more likely to include an atypical than a typical property (either shape or material) when referring to everyday objects like boxes when mentioning at least one property was necessary for unique reference.
Whether speakers are more likely to mention atypical properties over typical properties because they are more salient to them or because they are trying to make reference resolution easier for the listener, for whom presumably these properties are also salient, is an open question BIBREF25 . Some support for the audience design account comes from a study by Huettig2011, who found that listeners, after hearing a noun with a diagnostic color (e.g., frog), are more likely to fixate objects of that diagnostic color (green), indicating that typical object features are rapidly activated and aid visual search. Similarly, Arts2011 showed that overspecified expressions result in faster referent identification. Nevertheless, the benefit for listeners and the salience for speakers might simply be a happy coincidence and speakers might not, in fact, be designing their utterances for their addressees. We return to this issue in the General Discussion.
Nominal referring expressions
Even in the absence of adjectives, a referring expression can be more or less informative: the dalmatian communicates more information about the object in question than the dog (being a dalmatian entails being a dog), which in turn is globally more informative than the animal. Thus, this choice can be considered analogous to the choice of adding more modifiers – in both cases, the speaker has a choice of being more or less specific about the intended referent. However, the choice of reference level in simple nominal referring expressions is also interestingly different from that of adding modifiers in that there is no additional word-level cost associated with being more specific – the choice is between different one-word utterances, not between utterances differing in word count.
Nevertheless, cognitive cost affects the choice of reference level: in particular, speakers prefer more frequent words over less frequent ones BIBREF34 , and they prefer shorter ones over longer ones BIBREF35 , BIBREF36 . This may go part of the way towards explaining the well-documented effect from the concepts and categorization literature that speakers prefer to refer at the basic level BIBREF12 , BIBREF37 . That is, in the absence of other constraints, even when a superordinate level term would be sufficient for establishing reference (as in Figure 3 ), speakers prefer to say the dog rather than the animal.
Contextual informativeness is another factor that has been shown to affect speakers' nominal production choices (e.g., brennan1996). For instance, in a context like Figure 3 , speakers should use the subordinate level term dalmatian to refer to the target marked with a green border, because a higher-level term (dog, animal) would be contextually underinformative. However, there are contexts where either the superordinate animal or the basic level dog term would be sufficient for unique reference, as in Figure 3 , in which speakers nevertheless prefer to use the subordinate level term the dalmatian. This is the case when the object is a particularly good instance of the subordinate level term or a particularly bad instance of the basic level term, compared to the other objects in the context. For example, penguins, which are rated as particularly atypical birds, are often referred to at the subordinate level penguin rather than at the basic level bird, despite the general preference for the basic level BIBREF38 .
Summary
In sum, the production of modified and simple nominal referring expressions is governed by many factors, including an utterance's informativeness, its cost relative to alternative utterances, and the typicality of an object or its features. Critically, these factors are all in play at once, potentially interacting in rich and complex ways. In the next section, we provide an explicit computational account of these different factors and how they interact, with a focus on cases where speakers appear to be overinformative – either by adding more modifiers or by referring at a more specific level than necessary for establishing unique reference. A summary of the effects we will focus on in the remainder of the paper is provided in Table 1 .
To date, there is no theory to account for all of these different phenomena; and no model has attempted to unify overinformativeness in the domain of modified and nominal referring expressions. We touched on some of the explanations that have been proposed for these phenomena. We also indicated where computational models have been proposed for individual phenomena. In the next section, we present the Rational Speech Act modeling framework, which we then use to capture these disparate phenomena in one model.
Modeling speakers' choice of referring expression
Here we propose a computational model of referring expression production that accounts for the phenomena introduced above. The model is formulated within the Rational Speech Act (RSA) framework BIBREF14 , BIBREF15 . It provides a principled explanation for the phenomena reviewed in the previous section and holds promise for being generalizable to many further production phenomena related to overinformativeness, which we discuss in Section "General Discussion" . We proceed by first presenting the general framework in Section "Basic RSA" , and show why the most basic model, as formulated by frank2012, does not produce the phenomena outlined above due to its strong focus on speakers maximizing the informativeness of expressions under a deterministic semantics. In Section "RSA with continuous semantics – emergent color-size asymmetry" we introduce the crucial innovation: relaxing the assumption of a deterministic semantics. We show that the model can qualitatively account both for speakers' asymmetric propensity to overmodify with color rather than with size and (in Section "RSA with continuous semantics – scene variation" ) for speakers' propensity to overmodify more with increasing scene variation.
Basic RSA
The production component of RSA aims to soft-maximize the utility of utterances, where utility is defined in terms of the contextual informativeness of an utterance, given each utterance's literal semantics. Formally, this is treated as a pragmatic speaker $S_1$ reasoning about a literal listener $L_0$ , who can be described by the following formula:
$$P_{L_0}(o | u) \propto \mathcal {L}(u,o).$$ (Eq. 23)
The literal listener $L_0$ observes an utterance $u$ from the set of utterances $U$ , consisting of single adjectives denoting features available in the context of a set of objects $O$ , and returns a distribution over objects $o \in O$ . Here, $\mathcal {L}(u,o)$ is the lexicon that encodes deterministic lexical meanings such that:
$$\mathcal {L}(u,o) = \left\lbrace \begin{array}{rl} 1 & \text{if } u \text{ is true of } o\\ 0 & \text{otherwise}. \end{array} \right.$$ (Eq. 24)
Thus, $P_{L_0}(o | u)$ returns a uniform distribution over all contextually available $o$ in the extension of $u$ . For example, in the size-sufficient context shown in Figure 1 , $U = \lbrace \textrm {\emph {big}}, \textrm {\emph {small}}, \textrm {\emph {blue}}, \textrm {\emph {red}}\rbrace $ and $O = \lbrace o_{\textrm {big\_blue}}, o_{\textrm {big\_red}}, o_{\textrm {small\_blue}}\rbrace $ . Upon observing blue, the literal listener therefore assigns equal probability to $o_{\textrm {big\_blue}}$ and $o_{\textrm {small\_blue}}$ . Values of $P_{L_0}(o | u)$ for each $u$ are shown on the left in Table 2 .
The pragmatic speaker in turn produces an utterance with probability proportional to the utility of that utterance:
$$P_{S_1}(u | o) \propto e^{U(u,o)}$$ (Eq. 25)
The speaker's utility $U(u,o)$ is a function of both the utterance's informativeness with respect to the literal listener $P_{L_0}(o | u)$ and the utterance's cost $c(u)$ :
$$U(u,o) = \beta _{i} \ln P_{L_0}(o | u) - \beta _c c(u)$$ (Eq. 26)
Two free parameters, $\beta _i$ and $\beta _c$ , enter the computation, weighting the contributions of informativeness and utterance cost, respectively. In order to understand the effect of $\beta _i$ , it is useful to explore its effect when utterances are cost-free. In this case, as $\beta _i$ approaches infinity, the speaker increasingly only chooses utterances that maximize informativeness; if $\beta _i$ is 0, informativeness is disregarded and the speaker chooses randomly from the set of all available utterances; if $\beta _i$ is 1, the speaker probability-matches, i.e., chooses utterances proportional to their informativeness (equivalent to Luce's choice rule; luce1959). Applied to the example in Table 2 , if the speaker wants to refer to $o_{\textrm {small\_blue}}$ they have two semantically possible utterances, small and blue, where small is twice as informative as blue. They produce small with probability 1 when $\beta _i \rightarrow \infty $ , probability 2/3 when $\beta _i = 1$ , and probability 1/4 when $\beta _i = 0$ .
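The model of Eqs. 23, 25, and 26 is small enough to spell out directly. The following is a minimal sketch for the size-sufficient context of Figure 1 with one-word utterances and zero cost (adding cost would simply subtract $\beta _c \, c(u)$ from each utility); it is an illustration, not the reference implementation.

```python
import numpy as np

objects = ["big_blue", "big_red", "small_blue"]
utterances = ["big", "small", "blue", "red"]

def meaning(u, o):
    """Deterministic semantics: 1 if the adjective is true of the object, else 0."""
    size, color = o.split("_")
    return 1.0 if u in (size, color) else 0.0

def literal_listener(u):                        # Eq. 23
    scores = np.array([meaning(u, o) for o in objects])
    return scores / scores.sum()

def speaker(o, beta_i=1.0):                     # Eqs. 25-26, zero cost
    target = objects.index(o)
    # small floor keeps the log finite for utterances that are false of the target
    utils = np.array([beta_i * np.log(max(literal_listener(u)[target], 1e-10))
                      for u in utterances])
    probs = np.exp(utils - utils.max())
    return dict(zip(utterances, probs / probs.sum()))

print(speaker("small_blue", beta_i=0))    # uniform: 1/4 each
print(speaker("small_blue", beta_i=1))    # 'small' ~2/3, 'blue' ~1/3
print(speaker("small_blue", beta_i=30))   # essentially all mass on 'small'
```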
Conversely, disregarding informativeness and focusing only on cost, any asymmetry in costs will be exaggerated with increasing $\beta _c$ , such that the speaker will choose the least costly utterance with higher and higher probability as $\beta _c$ increases.
As has been pointed out by GattEtAl2013, the basic Rational Speech Act model described so far BIBREF14 does not generate overinformative referring expressions for two reasons. One of these is trivial: $U$ only contains one-word utterances. We can ameliorate this easily by allowing complex two-word utterances. We assume an intersective semantics for complex utterances $u_{\textrm {complex}}$ that consist of a two adjective sequence $u_{\textrm {size}} \in \lbrace \textrm {\emph {big}}, \textrm {\emph {small}}\rbrace $ and $u_{\textrm {color}} \in \lbrace \textrm {\emph {blue}}, \textrm {\emph {red}}\rbrace $ , such that the meaning of a complex two-word utterance is defined as
$$\mathcal {L}(u_{\text{complex}},o) = \mathcal {L}(u_{\text{size}},o) \times \mathcal {L}(u_{\text{color}},o).$$ (Eq. 29)
The resulting renormalized literal listener distributions for our example size-sufficient context in Figure 1 are shown in the middle columns in Table 2 .
Unfortunately, simply including complex utterances in the set of alternatives does not solve the problem. Let's turn again to the case where the speaker wants to communicate the small blue object. There are now two utterances, small and small blue, which are both more informative than blue and equally informative as each other, for referring to the small blue object. Because they are equally contextually informative, the only way for the complex utterance to be chosen with greater probability than the simple utterance is if it was the cheaper one. While this would achieve the desired mathematical effect, the cognitive plausibility of complex utterances being cheaper than simple utterances is highly dubious. Even if it wasn't dubious, as mentioned previously proportions of overinformative referring expressions are variable across experiments. The only way to achieve that variability under the basic model is to assume that the costs of utterances vary from task to task. This also seems to us an implausible assumption. Thus we must look elsewhere to account for overinformativeness. We propose that the place to look is the computation of informativeness itself.
RSA with continuous semantics – emergent color-size asymmetry
Here we introduce the crucial innovation: rather than assuming a deterministic truth-conditional semantics that returns true (1) or false (0) for any combination of expression and object, we relax to a continuous semantics that returns real values in the interval $[0,1]$ . Formally, the only change is in the values that the lexicon can return:
$$\mathcal {L}(u,o) \in [0, 1] \subset \mathbb {R}$$ (Eq. 32)
That is, rather than assuming that an object is unambiguously big (or not) or unambiguously blue (or not), this continuous semantics captures that objects count as big or blue to varying degrees (similar to approaches in fuzzy logic and prototype theory; zadeh1965fuzzy, Rosch1973).
To see the basic effect of switching to a continuous semantics, and to see how far we can get in capturing overinformativeness patterns with this change, let us explore a simple semantic theory in which all colors are treated the same, all sizes are as well, and the two compose via a product rule. That is, when a size adjective would be `true' of an object under a deterministic semantics, we take $\mathcal {L}(u,o) = x_{\text{size}}$ , a constant; when it is `false' of the object, $\mathcal {L}(u,o) = 1 - x_{\text{size}}$ . Similarly for color adjectives. This results in two free model parameters, $x_{\text{size}}$ and $x_{\text{color}}$ , that can take on different values, capturing that size and color adjectives may apply more or less well/reliably to objects. Together with the product composition rule, Eq. 29 , this fully specifies a relaxed semantic function for our reference domain.
Now consider the RSA literal listener, Eq. 23 , who uses these relaxed semantic values. Given an utterance, the listener simply normalizes over potential referents. As an example, the resulting renormalized literal listener distributions for the size-sufficient example context in Figure 1 are shown for values $x_{\text{size}} = .8$ and $x_{\text{color}} = .99$ on the right in Table 2 . Recall that in this context, the speaker intends for the listener to select the small blue pin. To see which would be the best utterance to produce for this purpose, we compare the literal listener probabilities in the $o_{\text{small\_blue}}$ column. The two best utterances under both the deterministic and the continuous semantics are bolded in the table: under the deterministic semantics, the two best utterances are small and small blue, with no difference in listener probability. In contrast, under the continuous semantics small has a smaller literal listener probability (.67) of retrieving the intended referent than the redundant small blue (.80). Consequently, the pragmatic speaker will be more likely to produce small blue than small, though the precise probabilities depend on the cost and informativeness parameters $\beta _c$ and $\beta _i$ .
Crucially, the reverse is not the case when color is the distinguishing dimension. Imagine the speaker in the same context wanted to communicate the big red pin. The two best utterances for this purpose are red (.99) and big red (.99). In contrast to the results for the small blue pin, these utterances do not differ in their capacity to direct the literal listener to the intended referent. The reason for this is that we defined color to be almost noiseless, with the result that the literal listener distributions in response to utterances containing color terms are more similar to those obtained via a deterministic semantics than the distributions obtained in response to utterances containing size terms. The reader is encouraged to verify this by comparing the row-wise distributions under the deterministic and continuous semantics in Table 2 .
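To make the numbers in Table 2 concrete, here is a minimal sketch of the continuous-semantics literal listener and speaker for this context, using the semantic values from the text ($x_{\text{size}} = .8$ , $x_{\text{color}} = .99$) and the product rule of Eq. 29; the utterance inventory and cost treatment are simplified, so this is an illustration rather than the full model.

```python
import numpy as np
from itertools import product

objects = ["big_blue", "big_red", "small_blue"]
x_size, x_color = 0.8, 0.99

def word_meaning(word, o):
    size, color = o.split("_")
    if word in ("big", "small"):
        return x_size if word == size else 1 - x_size
    return x_color if word == color else 1 - x_color

def utterance_meaning(u, o):
    # intersective semantics: product over the words of the utterance (Eq. 29)
    return np.prod([word_meaning(w, o) for w in u.split()])

utterances = ["big", "small", "blue", "red"] + \
             [f"{s} {c}" for s, c in product(["big", "small"], ["blue", "red"])]

def literal_listener(u):
    scores = np.array([utterance_meaning(u, o) for o in objects])
    return dict(zip(objects, scores / scores.sum()))

print(round(literal_listener("small")["small_blue"], 2))        # 0.67
print(round(literal_listener("small blue")["small_blue"], 2))   # 0.8

def speaker(o, beta_i=30, beta_c=1, word_cost=0.0):
    utils = np.array([beta_i * np.log(literal_listener(u)[o])
                      - beta_c * word_cost * len(u.split()) for u in utterances])
    probs = np.exp(utils - utils.max())
    return dict(zip(utterances, probs / probs.sum()))

probs = speaker("small_blue")
print(max(probs, key=probs.get))   # 'small blue': the redundant utterance wins
```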
To gain a wider understanding of the effects of assuming continuous meanings in contexts like that depicted in Figure 1 , we visualize the results of varying $x_{\text{size}}$ and $x_{\text{color}}$ in Figure 4 . To orient the reader to the graph: the deterministic semantics of utterances is approximated where the semantic values of both size and color utterances are close to 1 (.999, top right-most point in graph). In this case, the simple sufficient (small pin) and complex redundant utterance (small blue pin) are equally likely, around .5, because they are both equally informative and utterances are assumed to have 0 cost. All other utterances are highly unlikely. The interesting question is under which circumstances, if any, the standard color-size asymmetry emerges. This is the yellow/orange/red space in the `small blue' facet, characterized by values of $x_{\text{size}}$ that are lower than $x_{\text{color}}$ , with high values for $x_{\text{color}}$ . That is, redundant utterances are more likely than sufficient utterances when the redundant dimension (in this case color) is less noisy than the sufficient dimension (in this case size) and overall is close to noiseless.
Thus, when size adjectives are noisier than color adjectives, the model produces overinformative referring expressions with color, but not with size – precisely the pattern observed in the literature BIBREF5 , BIBREF0 . Note also that no difference in adjective cost is necessary for obtaining the overinformativeness asymmetry, though assuming a greater cost for size than for color does further increase the observed asymmetry. We defer a discussion of costs to Section "Experiment 1: scene variation in modified referring expressions" , where we infer the best parameter values for both the costs and the semantic values of size and color, given data from a reference game experiment.
We defer a complete discussion of the important potential psychological and linguistic interpretation of these continuous semantic values to the General Discussion in Section "General Discussion" . However, it is worth reflecting on why size adjectives may be inherently noisier than color adjectives. Color adjectives are typically treated as absolute adjectives while size adjectives are inherently relative BIBREF42 . That is, while both size and color adjectives are vague, size adjectives are arguably context-dependent in a way that color adjectives are not – whether an object is big depends inherently on its comparison class; whether an object is red does not. In addition, color as a property has been claimed to be inherently salient in a way that size is not BIBREF2 , BIBREF33 . Finally, we have shown in recent work that color adjectives are rated as less subjective than size adjectives BIBREF43 . All of these suggest that the use of size adjectives may be more likely to vary across people and contexts than color.
To summarize, we have thus far shown that RSA with continuous adjective semantics can give rise to the well-documented color-size asymmetry in the production of overinformative referring expressions when color adjectives are closer to deterministic truth-functions than size adjectives. The crucial mechanism is that when modifiers are relaxed, adding additional, `stricter' modifiers adds information. From this perspective, these redundant modifiers are not overinformative; they are rationally redundant, or sufficiently informative given the needs of the listener.
RSA with continuous semantics – scene variation
As discussed in Section "Overinformativeness in referring expressions" , increased scene variation has been shown to increase the probability of referring expressions that are overmodified with color. Here we simulate the experimental conditions reported by Koolen2013 and explore the predictions that continuous semantics RSA – henceforth cs-RSA – makes for these situations. Koolen2013 quantified scene variation as the number of feature dimensions along which pieces of furniture in a scene varied: type (e.g., chair, fan), size (big, small), and color (e.g., red, blue). Here, we simulate the high and low variation conditions from their Experiments 1 and 2, reproduced in Figure 5 .
In both conditions in both experiments, color was not necessary for establishing reference; that is, color mentions were always redundant. The two experiments differed in the dimension necessary for unique reference. In Exp. 1, only type was necessary (fan and couch in the low and high variation conditions in Figure 5 , respectively). In Exp. 2, size and type were necessary (big chair and small chair in Figure 5 , respectively). Koolen2013 found lower rates of redundant color use in the low variation conditions (4% and 9%) than in the high variation conditions (24% and 18%).
We generated model predictions for precisely these four conditions. Note that by adding the type dimension as a distinguishing dimension, we must allow for an additional semantic value $x_{\text{type}}$ , which encodes how noisy nouns are.
Koolen2013 counted any mention of color as a redundant mention. In Exp. 1, this includes simple redundant utterances like blue couch as well as complex redundant utterances like small blue couch. In Exp. 2, where size was necessary for unique reference, only the complex redundant utterance small brown chair was truly redundant (brown chair was insufficient, but was still included in counts of color mention). The results of simulating these conditions with parameters $\beta _i = 30$ , $ \beta _c = c(u_{\textrm {size}}) = c(u_{\textrm {color}}) = 1$ , $x_{\text{size}} = .8$ , $x_{\text{color}} = .999$ , and $x_{\text{type}} = .9$ are shown in Figure 5 , under the assumption that the cost of a two-word utterance $c(u)$ is the sum of the costs of the one-word sub-utterances. For both experiments, the model exhibits the empirically-observed qualitative effect of variation on the probability of redundant color mention: when variation is greater, redundant color mention is more likely. Indeed, this effect of scene variation is predicted by the model anytime the semantic values for size, type, and color are ordered as: $x_{\text{size}} \le x_{\text{type}} < x_{\text{color}}$ . If, on the other hand, $x_{\text{type}}$ is greater than $x_{\text{color}}$ , the probability of redundantly mentioning color is close to zero and does not differ between variation conditions (in those cases, color mention reduces, rather than adds, information about the target).
To further explore the scene variation effect predicted by RSA, turn again to Figure 1 . Here, the target item is the small blue pin and there are two distractor items: a big blue pin and a big red pin. Thus, for the purpose of establishing unique reference, size is the sufficient dimension and color the insufficient dimension. We can measure scene variation as the proportion of distractor items that do not share the value of the insufficient feature with the target, that is, as the number of distractors $n_{\textrm {diff}}$ that differ in the value of the insufficient feature divided by the total number of distractors $n_{\textrm {total}}$ : $ \textrm {scene variation} = \frac{n_{\textrm {diff}}}{n_{\textrm {total}}} $
In Figure 1 , there is one distractor that differs from the target in color (the big red pin) and there are two distractors in total. Thus, $\textrm {scene variation} = \frac{1}{2} = .5$ . In general, this measure of scene variation is minimal when all distractors are of the same color as the target, in which case it is 0. Scene variation is maximal when all distractors except for one (in order for the dimension to remain insufficient for establishing reference) are of a different color than the target. That is, scene variation may take on values between 0 and $\frac{n_{\textrm {total}} - 1}{n_{\textrm {total}}}$ .
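As a small illustration, this measure can be computed directly from a display's feature specification; the object encoding below is just an assumed stand-in for however displays are represented.

```python
def scene_variation(distractors, target, insufficient_feature):
    """Proportion of distractors that do NOT share the target's value on the
    insufficient feature (e.g., 'color' when size is sufficient for reference)."""
    n_diff = sum(d[insufficient_feature] != target[insufficient_feature] for d in distractors)
    return n_diff / len(distractors)

# Figure 1 example: target small blue pin; distractors big blue pin and big red pin.
target = {"size": "small", "color": "blue"}
distractors = [{"size": "big", "color": "blue"}, {"size": "big", "color": "red"}]
print(scene_variation(distractors, target, "color"))   # 0.5
```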
Using the same parameter values as above, we generate model predictions for size-sufficient and color-sufficient contexts, manipulating scene variation by varying number of distractors (2, 3, or 4) and number of distractors that don't share the insufficient feature value. The resulting model predictions are shown in Figure 6 . The predicted probability of redundant adjective use is largely (though not completely) correlated with scene variation. Redundant adjective use increases with increasing scene variation when size is sufficient (and color redundant), but not when color is sufficient (and size redundant). The latter prediction depends, however, on the actual semantic value of color—with slightly lower semantic values for color, the model predicts small increases in redundant size use. In general: increased scene variation is predicted to lead to a greater increase in redundant adjective use for less noisy adjectives.
RSA with a continuous semantics thus captures the qualitative effects of color-size asymmetry and scene variation in production of redundant expressions, and it makes quantitative predictions for both. Testing these quantitative predictions, however, will require more data. In Sections 3, 4, and 5 we quantitatively evaluate cs-RSA on datasets capturing the phenomena described in the Introduction (Table 1 ): modifier type and scene variation effects on modified referring expressions, typicality effects on color mention, and the choice of taxonomic level of reference in nominal choice.
Modified referring expressions: size and color modifiers under different scene variation conditions
Adequately assessing the explanatory value of RSA with continuous semantics requires evaluating how well it does at predicting the probability of various types of utterances occurring in large datasets of naturally produced referring expressions. We first report the results of a web-based interactive reference game in which we systematically manipulate scene variation (in a somewhat different way than Koolen2013 did). We then perform a Bayesian data analysis to both assess how likely the model is to generate the observed data – i.e., to obtain a measure of model quality – and to explore the posterior distribution of parameter values – i.e., to understand whether the assumed asymmetries in the adjectives' semantic values and/or cost discussed in the previous section are validated by the data.
Experiment 1: scene variation in modified referring expressions
We saw in Section "RSA with continuous semantics – scene variation" that cs-RSA correctly predicts qualitative effects of scene variation on redundant adjective use. In particular, we saw that color is more likely to be used redundantly when objects vary along more dimensions. To test the model predictions, we conducted an interactive web-based production study within a reference game setting. Speakers and listeners were shown arrays of objects that varied in color and size. Speakers were asked to produce a referring expression to allow the listener to identify a target object. We manipulated the number of distractor objects in the grid, as well as the variation in color and size among distractor objects.
We recruited 58 pairs of participants (116 participants total) over Amazon's Mechanical Turk who were each paid $1.75 for their participation. Data from another 7 pairs, who prematurely dropped out of the experiment and could therefore not be compensated for their work, were also included. Here and in all other experiments reported in this paper, participants' IP addresses were limited to US addresses and only participants with a past work approval rate of at least 95% were accepted.
Participants were paired up through a real-time multi-player interface BIBREF44 . For each pair, one participant was assigned the speaker role and one the listener role. They initially received written instructions informing them that one of them would be the Speaker and the other the Listener. They were further told that they would see some number of objects on each round and that the speaker's task was to communicate one of those objects, marked by a green border, to the listener. They were explicitly told that using locative modifiers (like left or right) would be useless because the order of objects on their partner's screen would be different from the order on their own screen. Before continuing to the experiment, participants were required to correctly answer a series of questions about the experimental procedure. These questions are listed in Appendix "Pre-experiment quiz" .
On each trial participants saw an array of objects. The array contained the same objects for both speaker and listener, but the order of objects was randomized and was typically different for speaker and listener. In the speaker's display, one of the objects – henceforth the target – was highlighted with a green border. See Figure 7 for an example of the listener's and speaker's view on a particular trial.
The speaker produced a referring expression to communicate the target to the listener by typing into an unrestricted chat window. After pressing Enter or clicking the `Send' button, the speaker's message was shown to the listener. The listener then clicked on the object they thought was the target, given the speaker's message. Once the listener clicked on an object, a red border appeared around that object in both the listener and the speaker's display for 1 second before advancing to the next trial. That is, both participants received feedback about the speaker's intended referent and the listener's inference.
Both speakers and listeners could write in the chat window, allowing listeners to request clarification if necessary. Listeners were able to click on an object, advancing to the next trial, only once the speaker sent an initial message.
Participants proceeded through 72 trials. Of these, half were critical trials of interest and half were filler trials. On critical trials, we varied the feature that was sufficient to mention for uniquely establishing reference, the total number of objects in the array, and the number of objects that shared the insufficient feature with the target.
Objects varied in color and size. On 18 trials, color was sufficient for establishing reference. On the other 18 trials, size was sufficient. Figure 7 shows an example of a size-sufficient trial. We further varied the amount of variation in the scene by varying the number of distractor objects in each array (2, 3, or 4) and the number of distractors that did share the redundant feature value with the target. That is, when size was sufficient, we varied the number of distractors that shared the same color as the target. This number had to be at least one, since otherwise the redundant property would have been sufficient for uniquely establishing reference, i.e. mentioning it would not have been redundant. Each total number of distractors was crossed with each possible number of distractors that shared the redundant property, leading to the following nine conditions: 2-1, 2-2, 3-1, 3-2, 3-3, 4-1, 4-2, 4-3, and 4-4, where the first number indicates the total number and the second number the shared number of distractors. Each condition occurred twice with each sufficient dimension. Objects never differed in type within one array (e.g., all objects are pins in Figure 7 ) but always differed in type across trials. Each object type could occur in two different sizes and two different colors. We deliberately chose photo-realistic objects of intuitively fairly typical colors. The 36 different object types and the colors they could occur with are listed in Appendix "Exp. 1 items" .
Fillers were target trials from Exp. 2, a replication of GrafEtAl2016. Each filler item contained a three-object grid. None of the filler objects occurred on target trials. Objects stood in various taxonomic relations to each other and required neither size nor color mention for unique reference. See Section "Unmodified referring expressions: nominal taxonomic level" for a description of these materials.
We collected data from 2177 critical trials. Because we did not restrict participants' utterances in any way, they produced many different kinds of referring expressions. Testing the model's predictions required, for each trial, classifying the produced utterance as an instance of a color-only mention, a size-only mention, or a color-and-size mention (or excluding the trial if no classification was possible). To this end we conducted the following semi-automatic data pre-processing.
An R script first automatically checked whether the speaker's utterance contained a precoded color (i.e. black, blue, brown, gold, green, orange, pink, purple, red, silver, violet, white, yellow) or size (i.e. big, bigger, biggest, huge, large, larger, largest, little, small, smaller, smallest, tiny) term. In this way, 95.7 % of cases were classified as mentioning size and/or color. However, this did not capture that sometimes, participants produced meaning-equivalent modifications of color/size terms for instance by adding suffixes (e.g., bluish), using abbreviations (e.g., lg for large or purp for purple), or using non-precoded color labels (e.g., lime or lavender). Expressions containing a typo (e.g., pruple instead of purple) could also not be classified automatically. In the next step, one of the authors (CG) therefore manually checked the automatic coding to include these kinds of modifications in the analysis. This covered another 1.9% of trials. Most of the time, participants converged on a convention of producing only the target's size and/or color, e.g., purple or big blue, but not an article (e.g., the) or the noun corresponding to the object's type (e.g., comb). Articles were omitted in 88.6 % of cases and nouns were omitted in 71.6 % of cases. We did not analyze this any further.
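The automatic first pass of this coding can be sketched as follows (the original pipeline used an R script; this Python approximation only checks the precoded term lists and leaves the manual corrections described above unmodeled):

```python
import re

COLOR_TERMS = {"black", "blue", "brown", "gold", "green", "orange", "pink",
               "purple", "red", "silver", "violet", "white", "yellow"}
SIZE_TERMS = {"big", "bigger", "biggest", "huge", "large", "larger", "largest",
              "little", "small", "smaller", "smallest", "tiny"}

def classify_utterance(utt):
    """First-pass automatic coding: does the utterance mention a precoded color
    and/or size term? Variants like 'bluish' or 'lg' fall through to manual coding."""
    words = set(re.findall(r"[a-z]+", utt.lower()))
    has_color, has_size = bool(words & COLOR_TERMS), bool(words & SIZE_TERMS)
    if has_color and has_size:
        return "color-and-size"
    if has_color:
        return "color"
    if has_size:
        return "size"
    return "unclassified"   # passed on to manual checking

print([classify_utterance(u) for u in ["big blue", "the purple one", "small comb", "purp"]])
```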
There were 50 cases (2.3%) in which the speaker made reference to the distinguishing dimension in an abstract way, e.g. different color, unique one, ripest, very girly, or guitar closest to viewer. While interesting as utterance choices, these cases were excluded from the analysis. There were 3 cases that were nonsensical, e.g. bigger off a shade, which were also excluded. In 6 cases only the insufficient dimension was mentioned – these were excluded from the analysis reported in the next section, where we are only interested in minimal or redundant utterances, not underinformative ones, but were included in the Bayesian data analysis reported in Section "Model evaluation" . Finally, we excluded six trials where the speaker did not produce any utterances, and 33 trials on which the listener selected the wrong referent, leading to the elimination of 1.5% of trials. After the exclusion, 2076 cases classified as one of color, size, or color-and-size entered the analysis.
Proportions of redundant color-and-size utterances are shown in Figure 8 alongside model predictions (to be explained further in Section "Model evaluation" ). There are three main questions of interest: first, do we replicate the color/size asymmetry in probability of redundant adjective use? Second, do we replicate the previously established effect of increased redundant color use with increasing scene variation? Third, is there an effect of scene variation on redundant size use and if so, is it smaller compared to that on color use, as is predicted under asymmetric semantic values for color and size adjectives?
We addressed all of these questions by conducting a single mixed effects logistic regression analysis predicting redundant over minimal adjective use from fixed effects of sufficient property (color vs. size), scene variation (proportion of distractors that does not share the insufficient property value with the target), and the interaction between the two. The model included the maximal random effects structure that allowed the model to converge: by-speaker and by-item random intercepts.
We observed a main effect of sufficient property, such that speakers were more likely to redundantly use color than size adjectives ( $\beta = 3.54$ , $SE = .22$ , $p < .0001$ ), replicating the much-documented color-size asymmetry. We further observed a main effect of scene variation, such that redundant adjective use increased with increasing scene variation ( $\beta = 4.62$ , $SE = .38$ , $p < .0001$ ). Finally, we also observed a significant interaction between sufficient property and scene variation ( $\beta = 2.26$ , $SE = .74$ , $p < .003$ ). Simple effects analysis revealed that the interaction was driven by the scene variation effect being smaller in the color-sufficient condition ( $\beta = 3.49$ ) than in the size-sufficient condition, as predicted if size modifiers are noisier than color modifiers. That is, while the color-sufficient condition indeed showed a scene variation effect—and as far as we know, this is the first demonstration of an effect of scene variation on redundant size use—this effect was tiny compared to that of the size-sufficient condition.
Model evaluation
In order to evaluate RSA with continuous semantics we conducted a Bayesian data analysis. This allowed us to simultaneously generate model predictions and infer likely parameter values, by conditioning on the observed production data (coded into size, color, and size-and-color utterances as described above) and integrating over the five free parameters. To allow for differential costs for size and color, we introduce separate cost weights ( $\beta _{c(\textrm {size})}, \beta _{c(\textrm {color})}$ ) applying to size and color mentions, respectively, in addition to semantic values for color and size ( $x_{\textrm {color}}$ , $x_{\textrm {size}}$ ) and an informativeness parameter $\beta _i$ . We assumed uniform priors for each parameter: $x_{\textrm {color}}, x_{\textrm {size}} \sim \mathcal {U}(0,1)$ , $\beta _{c(\textrm {size})}, \beta _{c(\textrm {color})} \sim \mathcal {U}(0,40)$ , $\beta _i \sim \mathcal {U}(0,40)$ . Inference for the cognitive model was exact. We used Markov Chain Monte Carlo (MCMC) with a burn-in of 10000 and lag of 10 to draw 2000 samples from the joint posteriors on the five free parameters.
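To make the structure of this analysis concrete, here is a heavily simplified sketch: a Metropolis-Hastings sampler over the five parameters, with the likelihood given by the cs-RSA speaker probability of each observed (coded) utterance. The trial data below are made-up placeholders, the treatment of false utterances (value $1-x$) is the same simplifying assumption as before, and the sampler settings differ from the exact-inference-plus-MCMC setup described above; the sketch is only meant to show the shape of the computation, not to reproduce the reported posteriors.

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_value(utterance, obj, x):
    val = 1.0
    for dim, word in utterance:
        val *= x[dim] if obj[dim] == word else 1.0 - x[dim]
    return val

def speaker_prob(obs_u, target, context, utterances, x, beta_i, costs):
    """Probability that the cs-RSA speaker produces obs_u for the target in this context."""
    def util(u):
        scores = np.array([semantic_value(u, o, x) for o in context])
        l0 = scores[target] / scores.sum()
        return beta_i * np.log(l0) - sum(costs[dim] for dim, _ in u)
    utils = np.array([util(u) for u in utterances])
    probs = np.exp(utils - utils.max())
    probs /= probs.sum()
    return probs[utterances.index(obs_u)]

# Placeholder coded trials (context, target index, produced utterance); NOT the real data.
ctx = [{"size": "small", "color": "blue"},
       {"size": "big",   "color": "blue"},
       {"size": "big",   "color": "red"}]
U = [(("size", "small"),), (("color", "blue"),), (("size", "small"), ("color", "blue"))]
trials = [(ctx, 0, U[2])] * 30 + [(ctx, 0, U[0])] * 10    # 75% redundant, 25% minimal (made up)

def log_likelihood(theta):
    x = {"color": theta[0], "size": theta[1]}
    beta_i, costs = theta[2], {"color": theta[3], "size": theta[4]}
    return sum(np.log(speaker_prob(u, t, c, U, x, beta_i, costs)) for c, t, u in trials)

# Metropolis-Hastings with the uniform priors described above.
lo, hi = np.zeros(5), np.array([1.0, 1.0, 40.0, 40.0, 40.0])
theta = np.array([0.9, 0.7, 10.0, 1.0, 1.0])
ll = log_likelihood(theta)
samples = []
for step in range(5000):
    prop = theta + rng.normal(scale=[0.03, 0.03, 1.0, 0.5, 0.5])
    if np.all(prop > lo) and np.all(prop < hi):     # uniform priors: reject outside support
        ll_prop = log_likelihood(prop)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    samples.append(theta.copy())
print(np.mean(samples[1000:], axis=0))   # rough posterior means: x_color, x_size, beta_i, costs
```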
Point-wise maximum a posteriori (MAP) estimates of the model's posterior predictives for just redundant utterance probabilities are shown alongside the empirical data in Figure 8 . In addition, MAP estimates of the model's posterior predictives for each combination of utterance, sufficient dimension, number of distractors, and number of different distractors (collapsing across different items) are plotted against all empirical utterance proportions in Figure 9 . At this level, the model achieves a correlation of $r = .99$ . Looking at results additionally on the by-item level yields a correlation of $r = .85$ (this correlation is expected to be lower both because each item contains less data, and because we did not provide the model any means to refer differently to, e.g., combs and pins). The model thus does a very good job of capturing the quantitative patterns in the data.
Posteriors over parameters are shown in Figure 10 . Crucially, the semantic value of color is inferred to be higher than that of size – there is no overlap between the 95% highest density intervals (HDIs) for the two parameters. That is, size modifiers are inferred to be noisier than color modifiers. The high inferred $\beta _i$ (MAP $\beta _i$ = 31.4, HDI = [30.7,34.5]) suggests that this difference in semantic value contributes substantially to the observed color-size asymmetries in redundant adjective use and that speakers are maximizing quite strongly. As for cost, there is a lot of overlap in the inferred weights of size and color modifiers, which are both skewed very close to zero, suggesting that a cost difference (or indeed any cost at all) is neither necessary to obtain the color-size asymmetry and the scene variation effects, nor justified by the data. Recall further that we already showed in Section "RSA with continuous semantics – emergent color-size asymmetry" that the color-size asymmetry in redundant adjective use requires an asymmetry in semantic value and cannot be reduced to cost differences. An asymmetry in cost only serves to further enhance the asymmetry brought about by the asymmetry in semantic value, but cannot carry the redundant use asymmetry on its own.
We evaluated the cs-RSA model on the production data obtained in Exp. 2. In particular, we were interested in using model comparison to address the following issues: First, can RSA using elicited typicality as the semantic values account for quantitative details of the production data? Second, are typicality values sufficient, or is there additional utility in including a noise offset determined by the type of modifier, as was used in the previous section? Third, does utterance cost explain any of the observed production behavior?
While the architecture of the model remained the same as that of the model presented in Section "RSA with continuous semantics – emergent color-size asymmetry" , we briefly review the minor necessary changes, some of which we already mentioned at the beginning of this section. These changes concerned the semantic values and the cost function.
Whereas for the purpose of evaluating the model in Section "Modified referring expressions: size and color modifiers under different scene variation conditions" we only considered the utterance alternatives color, size, and color-size, collapsing over the precise attributes, here we included in the lexicon each possible color adjective, type noun, and combination of the two. This substantially increased the size of the lexicon to 37 unique utterances. For each combination of utterance $u$ and object $o$ that occurred in the experiment, we included a separate semantic value $x_{u,o}$ , elicited in the norming experiments described in Section UID83 (rather than inferred as done for Exp. 1, to avoid overfitting). For any given context, we assumed the utterance alternatives that correspond to the individually present features and their combinations. For example, for the context in Figure 13 , the set of utterance alternatives was yellow, green, pear, banana, avocado, yellow pear, yellow banana, and green avocado.
We compared two choices of semantics for the model. In the empirical semantics version, the empirically elicited typicality values were directly used as semantic values. In the more complex fixed plus empirical semantics version, we introduce an additional parameter interpolating between the empirical typicality values and inferred values for each utterance type as employed in Section "Modified referring expressions: size and color modifiers under different scene variation conditions" (e.g. one value for color terms and another for type terms, which are multiplied when the terms are composed in an utterance). Note that this allows us to perform a nested model comparison, since the first model is a special case of the second.
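As a sketch, such an interpolation might look like the following; the linear mixture is only one natural way to combine the two values, so both the functional form and the example numbers should be read as assumptions rather than as the specification used in the analysis.

```python
def interpolated_semantic_value(x_empirical, x_fixed, beta_fixed):
    """Combine the empirically elicited typicality for an utterance/object pair with
    the inferred utterance-type-level value (a linear mixture is assumed here).
    beta_fixed = 0 recovers the purely empirical semantics."""
    return beta_fixed * x_fixed + (1.0 - beta_fixed) * x_empirical

# Hypothetical example: empirical typicality of "yellow banana" for a yellow banana (0.92),
# combined with multiplied type-level values for a color term and a noun (0.999 * 0.9).
print(interpolated_semantic_value(x_empirical=0.92, x_fixed=0.999 * 0.9, beta_fixed=0.4))
```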
For the purpose of evaluating the model in Section "Modified referring expressions: size and color modifiers under different scene variation conditions" we inferred two constant costs (one for color and one for size), and found in the Bayesian Data Analysis that the role of cost in explaining the data was minimal at best. Here, we compared two different versions of utterance cost. In the fixed cost model we treated cost the same way as in the previous section and included only a color and type level cost, inferred from the data. We then compared this model to an empirical cost model, in which we included a more complex cost function. Specifically, we defined utterance cost $c(u)$ as follows:
$$ c(u) = \beta _F\cdot p(u) + \beta _L\cdot l(u)$$ (Eq. 98)
Here, $p(u)$ is negative log utterance frequency, as estimated from the Google Books corpus (years 1950 to 2008); $l(u)$ is the mean empirical length of the utterance in characters in the production data (e.g., sometimes yellow was abbreviated as yel, leading to an $l(u)$ smaller than 6); $\beta _F$ is a weight on frequency; and $\beta _L$ is a weight on length. Both $p(u)$ and $l(u)$ were normalized to fall into the interval $[0,1]$ . The empirical cost function thus prefers short and frequent utterances (e.g., blue) over long and infrequent ones (turquoise-ish bananaesque thing). We compared both of these models to a simpler baseline in which utterances were assumed to have no cost.
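A minimal sketch of this cost function, assuming simple min-max normalization over the utterance set and illustrative (non-corpus) input values:

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def empirical_costs(rel_freqs, mean_lengths, beta_F, beta_L):
    """c(u) = beta_F * p(u) + beta_L * l(u), where p(u) is negative log corpus frequency
    and l(u) is mean empirical length in characters, each normalized to [0, 1]
    (min-max normalization over the utterance set is an assumption of this sketch)."""
    p = normalize(-np.log(np.asarray(rel_freqs, dtype=float)))
    l = normalize(mean_lengths)
    return beta_F * p + beta_L * l

# Illustrative relative frequencies and lengths only, not the Google Books estimates.
print(empirical_costs(rel_freqs=[1e-4, 1e-6, 1e-5],
                      mean_lengths=[6.0, 5.71, 4.0],
                      beta_F=1.0, beta_L=1.0))
```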
To evaluate the effect of these choices of semantics and cost, we conducted a full Bayesian model comparison. Specifically, we computed the Bayes Factor for each comparison, a measure quantifying the support for one model over another in terms of the relative likelihood they each assign to the observed data. As opposed to classical likelihood ratios, which only use the maximum likelihood estimate, the likelihoods in the Bayes Factor integrate over all parameters, thus automatically correcting for the flexibility due to extra parameters (the “Bayesian Occam's Razor”). Because it was intractable to analytically compute these integrals for our recursive model, we used Annealed Importance Sampling (AIS), a Monte Carlo algorithm commonly used to approximate these quantities. To ensure high-quality estimates, we took the mean over 100 independent samples for each model, with each chain running for 30,000 steps. The marginal log likelihoods for each model are shown in Table 6 . The best performing model used fixed plus empirical semantics and did not include a cost term. Despite the greater number of parameters associated with adding the fixed semantics to the empirical semantics, the fixed plus empirical semantics models were preferred across the board compared to their empirical-only counterparts ( $BF = 3.7 \times 10^{48}$ for fixed costs, $BF = 2.1 \times 10^{60}$ for empirical costs, and $BF = 1.4 \times 10^{71}$ for no cost). In comparison, additional cost-related parameters were not justified, with $BF = 5.7 \times 10^{21}$ for no cost compared to fixed cost and $BF = 2.1 \times 10^{27}$ for no cost compared to empirical cost.
The correlation between empirical utterance proportions and the best model's MAP predictions at the by-item level was $r=.94$ . Predictions for the best-performing model are visualized alongside empirical proportions in Figure 16 . The model successfully reproduces the empirically observed typicality effects in all four experimental conditions, with a reasonably good quantitative agreement. The interpolation weight between the fixed and empirical semantic values $\beta _{\textrm {fixed}}$ (Figure 17 ) is in the intermediate range: this provides evidence that a noisy truth-conditional semantics as employed in Exp. 1 is justified, but that taking into account graded category membership or typicality in an utterance's final semantic value is also necessary.
There is one major, and interesting, divergence from the empirical data in conditions without color competitors. Here, color-and-type utterances are systematically somewhat underpredicted in the informative condition, and systematically somewhat overpredicted in the overinformative condition. The reverse is true for color-only utterances. It is worth looking at the posterior over parameters, shown in Figure 17 , to understand the pattern. In particular, the utterance type level semantic value of type is inferred to be systematically higher than that of color, capturing that type utterances are less noisy than color utterances. An increase in color-only mentions in the overinformative condition could be achieved by reducing the semantic value for type. However, that would lead to a further and undesirable increase in color-only mentions in the informative condition as well. That is, the two conditions are in a tug-of-war with each other.
We evaluated cs-RSA on the production data from Exp. 3. The architecture of the model is identical to that of the model presented in Section "Model evaluation" . The only difference is that the set of alternatives contained only the three potential target utterances (i.e., the target's sub, basic, and super label). Whereas the modifier models from the previous sections treat all individual features and feature combinations represented in the display as utterance alternatives, for computational efficiency we restrict alternatives in the nominal choice model, considering only the three different levels of reference to the target as alternatives, e.g., dalmatian, dog, animal. (So, when a German Shepherd is a distractor, German Shepherd is not considered an alternative. This has minimal effects on model predictions as long as German Shepherd has low semantic fit to the dalmatian target.)
For the previous dataset, we tested which of three different semantics was most justified – a fixed compositional semantics with type-level semantic values, the empirically elicited typicality semantics, or a combination of the two. For the current dataset, this question did not arise, because we investigated only one-word utterances (all nouns). We hence only considered the empirical semantics. However, as for the previous dataset, we evaluated which cost function was best supported by the data: the one defined in (Eq. 98) (a linear weighted combination of an utterance's length and its frequency) or a simpler baseline in which utterances were assumed to have no cost.
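To illustrate the resulting nominal-choice speaker, here is a sketch run on hypothetical typicality values for a sub-necessary and a super-sufficient context with a dalmatian target. For simplicity it uses a length-only cost (in line with the finding reported below that frequency contributes little); all semantic values, lengths, and weights are illustrative placeholders rather than the elicited or inferred quantities.

```python
import numpy as np

BETA_I, BETA_L = 10.0, 2.0   # illustrative weights, not the inferred posterior values

# Hypothetical semantic values sem[u][obj]: how well each level of reference fits each object.
sem = {
    "dalmatian": {"dalmatian": 0.95, "greyhound": 0.05, "squirrel": 0.01, "shirt": 0.01, "cookie": 0.01},
    "dog":       {"dalmatian": 0.90, "greyhound": 0.90, "squirrel": 0.05, "shirt": 0.01, "cookie": 0.01},
    "animal":    {"dalmatian": 0.80, "greyhound": 0.80, "squirrel": 0.85, "shirt": 0.02, "cookie": 0.02},
}
length = {"dalmatian": 9, "dog": 3, "animal": 6}
norm_len = {u: (l - min(length.values())) / (max(length.values()) - min(length.values()))
            for u, l in length.items()}

def speaker(target, context):
    utts = list(sem)
    utils = []
    for u in utts:
        scores = np.array([sem[u][o] for o in context])
        l0 = scores[context.index(target)] / scores.sum()         # literal listener
        utils.append(BETA_I * np.log(l0) - BETA_L * norm_len[u])  # informativeness minus length cost
    utils = np.array(utils)
    p = np.exp(utils - utils.max())
    return dict(zip(utts, (p / p.sum()).round(2)))

# Sub-necessary context (distractors: greyhound, squirrel) vs. super-sufficient (shirt, cookie).
print("sub necessary:   ", speaker("dalmatian", ["dalmatian", "greyhound", "squirrel"]))
print("super sufficient:", speaker("dalmatian", ["dalmatian", "shirt", "cookie"]))
```

In this toy setting the subordinate term wins only when the greyhound distractor forces it, and the short basic-level term is preferred otherwise, mirroring the qualitative pattern in Figure 19 .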
We employed the same procedure as in the previous section to compute the Bayes Factor for the comparison between the two cost models, and to compute the posteriors over parameters. Priors were again $\beta _i \sim \mathcal {U}(0,20)$ , $\beta _{F} \sim \mathcal {U}(0,5)$ , $\beta _{L} \sim \mathcal {U}(0,5)$ , $\beta _t \sim \mathcal {U}(0,5)$ .
Despite the greater number of parameters associated with adding the cost function, the model that includes non-zero costs was preferred compared to its no-cost counterpart ( $BF = 2.8 \times 10^{77}$ ). Posteriors over parameters are shown in Figure 20 . It is worth noting that the weight on frequency is close to zero. That is, in line with the results from the mixed effects regression, it is an utterance's length, but not its frequency, that affects the probability with which it is produced in this paradigm.
Empirical utterance proportions are shown against MAP model predictions in Figure 21 . The correlation between empirical utterance proportions and the model's MAP predictions at the level of targets, utterances, and conditions was $r = .86$ . Further collapsing across targets yields a correlation of $r = .95$ . While the model overpredicts subordinate level and underpredicts basic level choices in the sub necessary condition, it otherwise captures the patterns in the data very well.
Discussion
In this section, we reported the results of a dataset of freely collected referring expressions that replicated the well-documented color-size asymmetry in redundant adjective use, the effect of scene variation on redundant color use, and showed a novel effect of scene variation on redundant size use. We also showed that cs-RSA provides an excellent fit to these data. In particular, the crucial element in obtaining the color-size asymmetry in overmodification is that size adjectives be noisier than color adjectives, captured in RSA via a lower semantic value for size compared to color. The effect is that color adjectives are more informative than size adjectives when controlling for the number of distractors each would rule out under a deterministic semantics. Asymmetries in the cost of the adjectives were not attested, and would only serve to further enhance the modification asymmetry resulting from the asymmetry in semantic value. In addition, we showed that asymmetric effects of scene variation on overmodification straightforwardly fall out of cs-RSA: scene variation leads to a greater increase in overmodification with less noisy than with more noisy modifiers because the less noisy modifiers (colors) on average provide more information about the target.
These results raise interesting questions regarding the status of the inferred semantic values: do color modifiers have inherently higher semantic values than size modifiers? Is the difference constant? What if the color modifier is a less well known one like mauve? The way we have formulated the model thus far, there would indeed be no difference in semantic value between red and mauve. Moreover, the model is not equipped to handle potential object-level idiosyncracies such as the typicality effects discussed in Section "Experiment 3 items" . We defer a fuller discussion of the status of the semantic value term to the General Discussion (Section "Continuous semantics" ) and turn first to cs-RSA's potential for capturing these typicality effects.
In this section we demonstrated that cs-RSA predicts color typicality effects in the production of referring expressions. The model employed here did not differ in its architecture from that employed in Section "Modified referring expressions: size and color modifiers under different scene variation conditions" , but only in that a) semantic values were assumed to operate at the individual utterance/object level in addition to the utterance type/object level; b) semantic values for individual utterances were empirically elicited via typicality norming studies; and c) an utterance's cost was allowed to be a function of its mean empirical length and its corpus frequency instead of having a constant utterance type level value, though utterance cost ultimately was found not to play a role in predicting utterance choice.
This suggests that the dynamics at work in the choice of color vs. size and in the choice of color as a function of the object's color typicality are very similar: speakers choose utterances by considering the fine-grained differences in information about the intended referent communicated by the ultimately chosen utterance compared to its competitor utterances. For noisier utterances (e.g., banana as applied to a blue banana), including the `overinformative' color modifier is useful because it provides information. For less noisy utterances (e.g., banana as applied to a yellow banana), including the color modifier is useless because the unmodified utterance is already highly informative with respect to the speaker's intention. These dynamics can sometimes even result in the color modifier being left out altogether, even when there is another—very atypical—object of the same type present, simply because the literal listener is expected to prefer the typical referent strongly enough.
Model comparison demonstrated the need for assuming a semantics that interpolates between a noisy truth-conditional semantics as employed in Exp. 1 and empirically elicited typicality values. This may reflect semantic knowledge that goes beyond graded category membership, additional effects of compositionality, or perhaps simply differences between our empirical typicality measure and the “semantic fit” expected by RSA models. Perhaps surprisingly, we replicated the result from Exp. 1 that utterance cost does not add any predictive power, even when quantified via a more sophisticated cost function that takes into account an utterance's length and frequency.
In the next section, we move beyond the choice of modifier and ask whether cs-RSA provides a good account of content selection in referring expressions more generally. To answer this question we turn to simple nominal referring expressions.
Modified referring expressions: color typicality
In Section "Modified referring expressions: size and color modifiers under different scene variation conditions" we showed that cs-RSA successfully captures both the basic asymmetry in overmodification with color vs. size as well as effects of scene variation on overmodification. In Section "Experiment 3 items" we discussed a further characteristic of speakers' overmodification behavior: speakers are more likely to redundantly produce modifiers that denote atypical rather than typical object features, i.e., they are more likely to refer to a blue banana as a blue banana rather than as a banana, and they are more likely to refer to a yellow banana as a banana than as a yellow banana BIBREF7 , BIBREF25 . So far we have not included any typicality effects in the semantics of our RSA model, hence the model so far would not capture this asymmetry.
A natural first step is to introduce a more nuanced semantics for nouns in our model. In particular, we could imagine a continuous semantics in which banana fits better (i.e. has a semantic value closer to 1 for) the yellow banana than the brown, and fits the brown better than the blue; specific such hypothetical values are shown in the first row of Table 3 . Let us further assume that modifying the noun with a color adjective leads to uniformly high semantic values close to 1 for those objects that a simple truth-conditional semantics would return `true' for (see diagonal in Table 3 ) and a very low semantic value close to 0 for any utterance applied to any object that a simple truth-conditional semantics would return `false' for.
The effect of running the speaker model forward with the standard literal listener treatment of the values in Table 3 for the three contexts in Figure 11 , where banana is the strictly sufficient utterance for unique reference (i.e., color is redundant under the standard view) is as follows: with $\beta _i$ = 12 and $\beta _c$ = 5, the resulting speaker probabilities for the minimal utterance banana are .95, .29, and .04, to refer to the yellow banana, the brown banana, and the blue banana, respectively. In contrast, the resulting speaker probabilities for the redundant yellow banana, brown banana, and blue banana are .05, .71, and .96, respectively. That is, redundant color mention increases with decreasing semantic value of the simple banana utterance.
This shows that cs-RSA can predict typicality effects if the semantic fit of the noun (and hence also of color-noun compounds) to an object is modulated by typicality. The reason the typicality effect arises is that, with the hypothetical values we assumed, the gain in informativeness between using the unmodified banana and the modified COLOR banana is greater in the blue than in the yellow banana case.
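The mechanism can be seen in a small forward simulation. The semantic values below are illustrative placeholders (not the Table 3 values), and the context is reduced to the target banana plus a single same-colored distractor of another type, so the probabilities will not match the ones reported above exactly; the point is only that redundant color mention increases as the noun's fit to the target decreases.

```python
import numpy as np

BETA_I, BETA_C = 12.0, 5.0   # the values used in the example above

def speaker(target, context, utterances, sem, n_words):
    """sem[u][obj] is the continuous semantic value of utterance u for object obj."""
    utils = []
    for u in utterances:
        scores = np.array([sem[u][o] for o in context])
        l0 = scores[context.index(target)] / scores.sum()
        utils.append(BETA_I * np.log(l0) - BETA_C * n_words[u])
    utils = np.array(utils)
    p = np.exp(utils - utils.max())
    return dict(zip(utterances, (p / p.sum()).round(2)))

# Illustrative values only: 'banana' fits the yellow banana best and the blue banana worst,
# while the color-modified utterance fits its referent near ceiling in all cases.
for color, noun_fit in [("yellow", 0.9), ("brown", 0.35), ("blue", 0.05)]:
    target, distractor = "banana (target)", "same-colored distractor"
    sem = {
        "banana": {target: noun_fit, distractor: 0.15},
        color + " banana": {target: 0.99, distractor: 0.01},
    }
    n_words = {"banana": 1, color + " banana": 2}
    print(color, speaker(target, [target, distractor], ["banana", color + " banana"], sem, n_words))
```

With these placeholder values the probability of the redundant color-and-type utterance rises from a few percent for the yellow banana to near one for the blue banana, reproducing the qualitative gradient described above.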
This example is somewhat oversimplified. In practice, speakers sometimes mention an object's color without mentioning the noun. In the contexts presented in Figure 11 this does not make much sense because there is always a competitor of the same color present. In contrast, in the contexts in Figure 13 and Figure 13 , color alone disambiguates the target. This suggests that we should consider among the set of utterance alternatives not just the simple type mentions (e.g., banana) and color-and-type mentions (e.g., yellow banana), but also simple color mentions (e.g., yellow). The dynamics of the model proceed as before.
An additional, more theoretically fraught, simplification concerns where typicality can enter into the semantics and how composition proceeds. In the above, we have assumed that the semantic value of the modified expression is uniformly high, which is qualitatively what is necessary (and, as we will see below, empirically correct) in order for the typicality effects to emerge. However, there is no straightforward way to compositionally derive such uniformly high values from the semantic values of the nouns and the semantic values of the color modifiers, which we have not yet discussed. Indeed, compositional semantics of graded meanings is a well-known problem for theories of modification BIBREF48 . Rather than try to solve it here, we note that RSA works at the level of whole utterances. Hence, if we can reasonably measure the semantic fit of each utterance to each possible referent, then cs-RSA will make predictions for production without the need to derive the semantic values compositionally. That is, if we can measure the typicality of the phrase blue banana for a banana, we don't need to derive it from blue, banana, and a theory of composition. This separates pragmatic aspects of reference, which are the topic of this paper, from issues in compositional semantics, which are not; hence we will take this approach for experimentally testing the predictions of relaxed semantics RSA for typicality effects.
The stimuli for Exp. 1 were specifically designed to be realistic objects with low color-diagnosticity, so they did not include objects with low typicality values or large degrees of variation in typicality. This makes the dataset from Exp. 1 not well-suited for investigating typicality effects. We therefore conducted a separate production experiment in the same paradigm but with two broad changes: first, objects' color varied in typicality; and second, we did not manipulate object size, focusing only on color mention. This allows us to ask three questions: first, do we replicate the typicality effects reported in the literature – that is, are less color-typical objects more likely to lead to redundant color use than more color-typical objects? Second, does cs-RSA with empirically elicited typicality values as proxy for a continuous semantics capture speakers' behavior? Third, does the semantic value depend only on typicality, or is there still a role for modifier type noise of the kind we investigated in the previous section? In addition, we can investigate the extent to which utterance cost, which we found not to play a role in the previous section, affects the choice of referring expression.
Experiment 2: color typicality effects
We recruited 61 pairs of participants (122 participants total) over Amazon's Mechanical Turk who were each paid $1.80 for their participation.
The procedure was identical to that of Exp. 1. See Figure 12 for an example speaker and listener perspective.
Each participant completed 42 trials. In this experiment, there were no filler trials, since pilot studies with and without fillers delivered very similar results. Each array presented to the participants consisted of three objects that could differ in type and color. One of the three objects functioned as a target and the other two as its distractors.
The stimuli were selected from seven color-diagnostic food items (apple, avocado, banana, carrot, pear, pepper, tomato), which all occurred in a typical, mid-typical and atypical color for that object. For example, the banana appeared in the colors yellow (typical), brown (midtypical), and blue (atypical). All items were presented as targets and as distractors. Pepper additionally occurred in a fourth color, which only functioned as a distractor due to the need for a green color competitor (as explained in the following paragraph).
We refer to the different context conditions as “informative”, “informative-cc”, “overinformative”, and “overinformative-cc” (see Figure 13 ). A context was “overinformative” (Figure 13 ) when mentioning the type of the item, e.g., banana, was sufficient for unambiguously identifying the target. In this condition, the target never had a color competitor. This means that mentioning color alone (without a noun) was also unambiguously identifying. In contrast, in the overinformative condition with a color competitor (“overinformative-cc”, Figure 13 ), color alone was not sufficient. In the informative conditions, color and type mention were necessary for unambiguous reference. Again, one context type did (Figure 13 ) and one did not (Figure 13 ) include a color competitor among its distractors.
Each participant saw 42 different contexts. Each of the 21 items (color-type combinations) was the target exactly twice, but the context in which they occurred was drawn randomly from the four possible conditions mentioned above. In total, there were 84 different possible configurations (seven target food items, each of them in three colors, where each could occur in four contexts). Trial order was randomized.
Two participant-pairs were excluded because they did not finish the experiment and therefore could not receive payment. Trials on which the speaker did not produce any utterances were also excluded, resulting in the exclusion of two additional participant-pairs. Finally, there were 10 speakers who consistently used roundabout descriptions instead of direct referring expressions (e.g., monkeys love... to refer to banana). These pairs were also excluded, since such indirect expressions do not inform our questions about modifier production.
We analyzed data from 1974 trials. Just as in Exp. 1, participants communicated freely, which led to a vast amount of different referring expressions. To test the model's predictions, the utterance produced for each trial was to be classified as belonging to one of the following categories: type-only (“banana”), color-and-type (“yellow banana”), and color-only (“yellow”) utterances. Referring expressions that included categories (“yellow fruit”), descriptions (“has green stem”), color-circumscriptions (“funky carrot”), and negations (“yellow but not banana”) were regarded as other and excluded. To this end we conducted the following semi-automatic data pre-processing.
The referring expressions were analyzed similarly to Exp. 1. First, 32 trials (1.6%) were excluded because the listener selected the wrong referent. 109 trials (5.6%) were excluded because the referring expressions included one of the exceptional cases described above (e.g., using negations). An R script then automatically checked the remaining 1833 utterances for whether they contained a precoded color term (i.e. green, purple, white, black, brown, yellow, orange, blue, pink, red, grey) or type (i.e. apple, banana, carrot, tomato, pear, pepper, avocado). This way, 96.5% of the remaining cases were classified as mentioning type and/or color.
However, this did not capture that participants sometimes produced meaning-equivalent modifications of color/type terms, for instance by adding suffixes (e.g., pinkish), using abbreviations (e.g., yel for yellow), or using non-precoded color and type labels (e.g., lavender or jalapeno). In addition, expressions that contained a typo (e.g., blakc instead of black) could also not be classified automatically. One of the authors (EK) therefore manually hand-coded these cases. There were 6 cases (0.3%) that could not be categorized and were excluded. Overall, 1827 utterances classified as one of color, type, or color-and-type entered the analysis.
In order to test for typicality effects on the production data and to evaluate cs-RSA's performance, we collected empirical typicality values for each utterance/object pair in three separate studies. The first study collected typicalities for color-and-type/object pairs (e.g., yellow banana as applied to a yellow banana, a blue banana, an orange pear, etc., see Figure 14 ). The second study collected typicalities for type-only/object pairs (e.g., banana as applied to a yellow banana, a blue banana, an orange pear, etc., Figure 14 ). The third study collected typicalities for color/color pairs (e.g., yellow as applied to a color patch of the average yellow from the yellow banana stimulus or to a color patch of the average orange from the orange pear stimulus, and so on, for all other colors, Figure 14 ).
On each trial of the type or color-and-type studies, participants saw one of the stimuli used in the production experiment in isolation and were asked: “How typical is this object for a utterance”, where utterance was replaced by an utterance of interest. In the color typicality study, they were asked “How typical is this color for the color color?”, where color was replaced by one of the relevant color terms. They then adjusted a continuous sliding scale with endpoints labeled “very atypical” and “very typical” to indicate their response. A summary of the the three typicality norming studies is shown in Table 4 .
Slider values were coded as falling between 0 (`very atypical') and 1 (`very typical'). For each utterance-object combination, we computed mean typicality ratings. As an example, the means for the banana items and associated color patches are shown in Table 5 . The values exhibit the same gradient as those hypothesized for the purpose of the example in Table 3 . The means for all items are visualized in Figure 15 . Mean typicality values for utterance-object pairs obtained in the norming studies are used in the analyses and visualizations in the following.
Proportions of type-only (banana), color-and-type (yellow banana), color-only (yellow), and other (funky carrot) utterances are shown in Figure 16 as a function of the described item's mean type-only (banana) typicality. Visually inspecting just the explicitly marked yellow banana, brown banana, and blue banana cases suggests a large typicality effect in the overinformative conditions as well as a smaller typicality effect in the informative conditions, such that color is less likely to be produced with increasing typicality of the object.
The following questions are of interest. First, do we replicate the previously documented typicality effect on redundant color mention (as suggested by the visual inspection of the banana item)? Second, does typicality affect color mention even when color is informative (i.e., technically necessary for establishing unique reference)? Third, are speakers sensitive to the presence of color competitors in their use of color or are typicality effects immune to the nature of the distractor items?
To address these questions we conducted a mixed effects logistic regression predicting color use from fixed effects of typicality, informativeness, and color competitor presence. We used the typicality norms obtained in the type/object typicality elicitation study reported above (see Figure 14 ) as the continuous typicality predictor. The informativeness condition was coded as a binary variable (color informative vs. color overinformative trial) as was color competitor presence (absent vs. present). All predictors were centered before entering the analysis. The model included by-speaker and by-item random intercepts, which was the maximal random effects structure that allowed the model to converge.
There was a main effect of typicality, such that the more typical an object was for the type-only utterance, the lower the log odds of color mention ( $\beta $ = -4.17, $SE$ = 0.45, $p <$ .0001), replicating previously documented typicality effects. Stepwise model comparison revealed that including interaction terms was not justified by the data, suggesting that speakers produce more typical colors less often even when the color is in principle necessary for establishing reference (i.e., in the informative conditions). This is notable: speakers sometimes call a yellow banana simply a banana even when other bananas are present, presumably because they can rely on listeners drawing the inference that they must have meant the most typical banana. In contrast, blue bananas' color is always mentioned in the informative conditions.
There was also a main effect of informativeness, such that color mention was less likely when it was overinformative than when it was informative ( $\beta $ = -5.56, $SE$ = 0.33, $p <$ .0001). Finally, there was a main effect of color competitor presence, such that color mention was more likely when a color competitor was absent ( $\beta $ = 0.71, $SE$ = 0.16, $p <$ .0001). This suggests that speakers are indeed sensitive to the contextual utility of color – color typicality alone does not capture the full set of facts about color mention, as we already saw in Section "Modified referring expressions: size and color modifiers under different scene variation conditions" .
Unmodified referring expressions: nominal taxonomic level
In this section we investigate whether cs-RSA accounts for referring expression production beyond the choice of modifier. In particular, we focus on speakers' choice of taxonomic level of reference in nominal referring expressions. A particular object can be referred to at its subordinate (dalmatian), basic (dog), or superordinate (animal) level, among other choices. As discussed in Section "Nominal referring expressions" , multiple factors play a role in the choice of nominal referring expression, including an expression's contextual informativeness, its cognitive cost (short and frequent terms are preferred over long and infrequent ones; griffin1998, jescheniak1994), and its typicality (an utterance is more likely to be used if the object is a good instance of it; Jolicoeur1984). Thus, we explore the same factors as potential contributors to nominal choice that we explored in previous sections for modification.
In order to evaluate cs-RSA for nominal choice, we proceeded as in Section "Modified referring expressions: color typicality" : we collected production data within the same reference game setting, but varied the contextual informativeness of utterances by varying whether distractors shared the same basic or superordinate category with the target (see Figure 18 ). We also elicited typicality ratings for object-utterance combinations, which entered the model as the semantic values via the lexicon. We then conducted Bayesian data analysis, as in previous sections, for model comparison.
Experiment 3: taxonomic level of reference in nominal referring expressions
We recruited 58 pairs of participants (116 participants total, the same participants as in Exp. 1) over Amazon's Mechanical Turk who were each paid $1.75 for their participation.
The procedure was identical to that of Exp. 1. Participants proceeded through 72 trials. Of these, half were critical trials of interest and half were filler trials (the critical trials from Exp. 1). On critical trials, we varied the level of reference that was sufficient to mention for uniquely establishing reference.
Stimuli were selected from nine distinct domains, each corresponding to distinct basic level categories such as dog. For each domain, we selected four subcategories to form our target set (e.g. dalmatian, pug, German Shepherd and husky). See Table 9 in Appendix "Experiment 3 items" for a full list of domains and their associated target items. Each domain also contained an additional item which belonged to the same basic level category as the target (e.g., greyhound) and items which belonged to the same supercategory but not the same basic level (e.g., elephant or squirrel). The latter items were used as distractors.
Each trial consisted of a display of three images, one of which was designated as the target object. Each pair of participants saw each target exactly once, for a total of 36 trials. These target items were randomly assigned distractor items which were selected from three different context conditions, corresponding to different communicative pressures (see Figure 18 ). The subordinate necessary contexts contained one distractor of the same basic category and one distractor of the same superordinate category (e.g., target: dalmatian, distractors: greyhound (also a dog) and squirrel (also an animal)). The basic sufficient contexts contained either two distractors of the same superordinate category but different basic category as the target (e.g., target: husky, distractors: hamster and elephant) or one distractor of the same superordinate category and one unrelated item (e.g., target: pug, distractors: cow and table). The superordinate sufficient contexts contained two unrelated items (e.g., target: German Shepherd, distractors: shirt and cookie).
This context manipulation served as a manipulation of utterance informativeness: any target could be referred to at the subordinate (dalmatian), basic (dog) or superordinate (animal) level. However, the level of reference necessary for uniquely referring differed across contexts.
In order to test for typicality effects on the production data and to evaluate cs-RSA's performance, we collected empirical typicality values for each utterance/object pair (see Appendix "Typicality norms for Experiment 3" for details).
We collected 2193 referring expressions. To determine the level of reference for each trial, we used the following procedure. First, speakers' and listeners' messages were parsed automatically; the referring expression used by the speaker was extracted for each trial and checked for whether it contained the current target's correct sub(ordinate), basic, or super(ordinate) level term using a simple grep search. In this way, 71.4% of trials were labelled as mentioning a pre-coded level of reference. In the next step, remaining utterances were checked manually by one of the authors (CG) to determine whether they contained a correct level of reference term which was not detected by the grep search due to typos or grammatical modification of the expression. In this way, meaning-equivalent alternatives such as doggie for dog, or reduced forms such as gummi, gummies and bears for gummy bears were counted as containing the corresponding level of reference term. This covered another 15.0% of trials. 41 trials on which the listener selected the wrong referent were excluded, leading to the elimination of 2.1% of trials. Six trials were excluded because the speaker did not produce any utterances. Additionally, a total of 12.5% of correct trials were excluded because the utterance consisted only of an attribute of the superclass (the living thing for animal), of the basic level (can fly for bird), of the subcategory (barks for dog) or of the particular instance (the thing facing left) rather than a category noun. These kinds of attributes were also mentioned in addition to the noun on trials which were included in the analysis for 8.9% of sub level terms, 18.9% of basic level terms, and 60.9% of super level terms. On 1.2% of trials two different levels of reference were mentioned; in this case the more specific level of reference was counted as being mentioned in this trial. After all exclusion and pre-processing, 1872 cases classified as one of sub, basic, or super entered into the analysis.
Proportions of sub, basic, and super level utterances are shown in Figure 19 . Overall, super level mentions are highly dispreferred ( $< 2\%$ ), so we focus in this section only on predictors of sub over basic level mentions. The clearest pattern of note is that sub level mentions are only preferred in the most constrained context that necessitates the sub level mention for unique reference (e.g., target: dalmatian, distractor: greyhound; see Figure 18 ). Nevertheless, even in these contexts there is a non-negligible proportion of basic level mentions (28%). In the remaining contexts, where the sub and basic level are equally informative, there is a clear preference for the basic level. In addition, mitigating this context effect, sub level mentions increased with increasing typicality of the object as an instance of the sub level utterance.
What explains these preferences? In order to test for effects of informativeness, length, frequency, and typicality on nominal choice we conducted a mixed effects logistic regression predicting sub over basic level mention from centered predictors for the factors of interest and the maximal random effects structure that allowed the model to converge (random by-speaker and by-target intercepts).
Frequency was coded as the difference between the sub and the basic level's log frequency, as extracted from the Google Books Ngram English corpus ranging from 1960 to 2008.
Length was coded as the ratio of the sub to the basic level's length. We used the mean empirical lengths in characters of the utterances participants produced. For example, the minivan, when referred to at the subcategory level, was sometimes called “minivan” and sometimes “van” leading to a mean empirical length of 5.71. This is the value that was used, rather than 7, the length of “minivan”. That is, a higher frequency difference indicates a lower cost for the sub level term compared to the basic level, while a higher length ratio reflects a higher cost for the sub level term compared to the basic level.
Typicality was coded as the ratio of the target's sub to basic level label typicality. That is, the higher the ratio, the more typical the object was for the sub level label compared to the basic level; or in other words, a higher ratio indicates that the object was relatively atypical for the basic label compared to the sub label. For instance, the panda was relatively atypical for its basic level “bear” (mean rating 0.75) compared to the sub level term “panda bear” (mean rating 0.98), which resulted in a relatively high typicality ratio.
Informativeness condition was coded as a three-level factor: sub necessary, basic sufficient, and super sufficient, where basic sufficient (two superordinate distractors) and basic sufficient (one superordinate distractor) were collapsed into basic sufficient. Condition was Helmert-coded: two contrasts over the three condition levels were included in the model, comparing each level against the mean of the remaining levels (in order: sub necessary, basic sufficient, super sufficient). This allowed us to determine whether the probabilities of type mention for neighboring conditions were significantly different from each other, as suggested by Figure 19 .
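As an illustration of the predictor coding described above, the sketch below constructs the frequency difference, length ratio, typicality ratio, and Helmert contrasts in Python. The data values, column names, and exact contrast weights are illustrative assumptions; the analysis itself was a mixed effects logistic regression, which this snippet does not fit.

```python
import pandas as pd

# Hypothetical trial-level data; values and column names are illustrative only.
df = pd.DataFrame({
    "log_freq_sub":   [4.2, 5.0, 3.8],    # Google Books log frequency, sub level term
    "log_freq_basic": [6.1, 6.1, 5.9],    # Google Books log frequency, basic level term
    "len_sub":        [9.0, 5.71, 8.0],   # mean empirical length in characters
    "len_basic":      [3.0, 3.0, 4.0],
    "typ_sub":        [0.98, 0.90, 0.95], # empirical typicality, sub level label
    "typ_basic":      [0.75, 0.85, 0.80], # empirical typicality, basic level label
    "condition": ["sub_necessary", "basic_sufficient", "super_sufficient"],
})

# Predictors as defined in the text.
df["freq_diff"] = df["log_freq_sub"] - df["log_freq_basic"]  # higher = cheaper sub term
df["len_ratio"] = df["len_sub"] / df["len_basic"]            # higher = costlier sub term
df["typ_ratio"] = df["typ_sub"] / df["typ_basic"]            # higher = object more sub-typical

# Helmert contrasts: each level compared against the mean of the remaining levels
# (one common weight convention; the exact weights used in the analysis may differ).
helmert = {
    "sub_necessary":    ( 2/3,  0.0),
    "basic_sufficient": (-1/3,  1/2),
    "super_sufficient": (-1/3, -1/2),
}
df["cond_h1"] = [helmert[c][0] for c in df["condition"]]
df["cond_h2"] = [helmert[c][1] for c in df["condition"]]

# Center the continuous predictors before model fitting.
for col in ["freq_diff", "len_ratio", "typ_ratio"]:
    df[col] = df[col] - df[col].mean()
```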
The log odds of mentioning the sub level term were greater in the sub necessary condition than in either of the other two conditions ($\beta = 2.11$, $SE = .17$, $p < .0001$), and greater in the basic sufficient condition than in the super sufficient condition ($\beta = .60$, $SE = .15$, $p < .0001$), suggesting that the contextual informativeness of the sub level mention has a gradient effect on utterance choice. There was also a main effect of typicality, such that the sub level term was preferred for objects that were more typical for the sub level than for the basic level description ($\beta = 4.82$, $SE = 1.35$). In addition, there was a main effect of length, such that as the length of the sub level term increased relative to the basic level term (“chihuahua”/“dog” vs. “pug”/“dog”), the sub level term was dispreferred (“chihuahua” is dispreferred relative to “pug”). The main effect of frequency did not reach significance.
Unsurprisingly, there was also significant by-participant and by-domain variation in sub level term mention. For instance, mentioning the sub over the basic level term was preferred more in some domains (e.g. in the “candy” domain) than in others. Likewise, some domains had a greater preference for basic level terms (e.g. the “shirt” domain). Using the super term also ranged from hardly being observable (e.g., plant in the “flower” domain) to being used more frequently (e.g., furniture in the “table” domain and vehicle in the “car” domain).
We thus replicated the well-documented preference to refer to objects at the basic level, which is partly modulated by contextual informativeness and partly a result of the basic level term's cognitive cost and typicality compared to its sub level competitor, mirroring the results from Exp. 2.
Perhaps surprisingly, we did not observe an effect of frequency on sub level term mention. This is likely due to the modality of the experiment: the current study was a written production study, while most studies that have identified frequency as a factor governing production choices are spoken production studies. It may be that the cognitive cost of typing longer words is disproportionately higher than that of producing longer words in speech, thus obscuring a potential effect of frequency. Support for this hypothesis comes from studies comparing written and spoken language, which have found that spoken descriptions are likely to be longer than written descriptions and, in English, seem to have a lower propositional information density than written descriptions BIBREF50 .
General Discussion
In this paper we have provided a unified account of referring expression choice that solves a long-recognized puzzle for rational theories of language use: why do speakers' referring expressions often and systematically exhibit seeming overinformativeness? We have shown here that by allowing contextual utterance informativeness to be computed with respect to a continuous (or noisy) rather than a Boolean semantics, utterances that seem overinformative can in fact be sufficiently informative. This happens when what seems like the prima facie sufficiently informative utterance is in fact noisy and may lead a literal listener astray; adding redundancy ensures successful communication. This simple modification to the Rational Speech Act approach allowed us to capture: the basic well-documented asymmetry for speakers to be more likely to redundantly use color adjectives than size adjectives; the interaction between sufficient dimension and scene variation in the probability of redundancy; and typicality effects in both color modifier choice and noun choice.
We have thus shown that with one key innovation – a continuous semantics – one can retain the assumption that speakers rationally trade off informativeness and cost of utterances in language production. Rather than being wastefully overinformative, adding redundant modifiers or referring at a lower taxonomic level than strictly necessary is in fact appropriately informative. This innovation thus not only provides a unified explanation for a number of key patterns within the overinformative referring expression literature that have thus far eluded a unified explanation; it also extends to the domain of nominal choice. And in contrast to previously proposed computational models, it is straightforwardly extendable to any instance of definite referring expressions of the sort we have examined here.
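As a minimal illustration of the model architecture argued for here, the sketch below implements an RSA speaker with a continuous semantics in Python. The lexicon, semantic values, costs, and weights are toy assumptions for a single pin context, not the fitted values reported for the experiments.

```python
import numpy as np

# Toy context: target is a small blue pin; distractors are a big blue pin and a big red pin.
utterances = ["pin", "small pin", "blue pin", "small blue pin"]
objects = ["small_blue_pin", "big_blue_pin", "big_red_pin"]

# Continuous semantic values s(u, o): how well utterance u applies to object o.
# Size semantics is assumed noisier than color semantics (toy numbers).
semantics = np.array([
    [0.99, 0.99, 0.99],   # "pin"
    [0.80, 0.05, 0.05],   # "small pin"
    [0.95, 0.95, 0.05],   # "blue pin"
    [0.78, 0.05, 0.03],   # "small blue pin"
])
costs = np.array([0.0, 0.1, 0.1, 0.2])   # assumed per-utterance costs
beta_i, beta_c = 12.0, 1.0               # informativeness and cost weights (assumed)

def literal_listener(sem):
    """P_L0(o | u): normalise the semantic values over objects."""
    return sem / sem.sum(axis=1, keepdims=True)

def pragmatic_speaker(sem, costs):
    """P_S1(u | o): softmax trade-off of informativeness against cost."""
    L0 = literal_listener(sem)
    utility = beta_i * np.log(L0) - beta_c * costs[:, None]
    unnorm = np.exp(utility)
    return unnorm / unnorm.sum(axis=0, keepdims=True)   # normalise over utterances

S1 = pragmatic_speaker(semantics, costs)
for u, p in zip(utterances, S1[:, 0]):
    print(f"P_S1({u!r} | small blue pin) = {p:.3f}")
```

With these toy values, the redundant small blue pin receives substantial probability precisely because the size-only alternative is noisy for the literal listener.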
Comparison of model components across experiments
While the core architecture with relaxed semantics remained constant throughout the paper, some peripheral components were adjusted to accommodate the aims of the different experiments. These different choices are fully consistent with one another, and many of them were justified against alternatives via model comparison. Still, it is valuable to highlight the dimensions along which these components varied. We have provided an overview of the best-fitting RSA models for each of the three reported production datasets in Table 7 .
Most prominently, Exps. 2 and 3 aimed to predict patterns of reference via typicality at the object-level; in those cases the model thus required semantic values for each utterance-object pair in the lexicon. While these values could have in principle been inferred from the data, as we inferred the two type-level values in Exp. 1, it would have introduced a large number of additional parameters (see size of lexicon). Instead, we addressed this problem by empirically eliciting these values in an independent task and introducing a single free concentration parameter $\beta _t$ that modulated their strength. In the case of Exp. 2, we found that the best-fitting model smoothly integrated these empirical values with type-level values used in Exp. 1.
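One simple way a concentration parameter of this kind can modulate empirically elicited typicality values is by exponentiation, as sketched below; whether this matches the exact parameterisation used in the model fits is an assumption here, and the numbers are toy values.

```python
import numpy as np

# Empirically elicited typicality ratings for utterance-object pairs (toy values).
typicality = np.array([
    # yellow_banana  blue_banana
    [0.92,           0.41],   # utterance "banana"
    [0.30,           0.89],   # utterance "blue banana"
])

def semantic_values(typ, beta_t):
    """Sharpen (beta_t > 1) or flatten (beta_t < 1) the elicited typicalities;
    beta_t = 0 removes typicality differences entirely."""
    return typ ** beta_t

for beta_t in (0.0, 1.0, 3.0):
    print(f"beta_t = {beta_t}:\n{semantic_values(typicality, beta_t).round(3)}")
```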
The need to make object-level predictions also drove decisions about what to use as the cost function and the set of alternative utterances. For instance, in Exp. 3 we could have inferred the cost of each noun but this again would have introduced a large number of free parameters and risked overfitting. Instead we used the empirically estimated length and frequency of each word. For Exp. 2, we tested models both using fixed costs for each modifier as in Exp. 1 and empirical length and frequency costs as in Exp. 3, but our model comparison showed that neither sufficiently improved the model's predictions.
Finally, the set of alternative utterances differed slightly across the three experiments for computational reasons. Because Exp. 1 collapsed over the particular levels of size and color, it was practical to consider all utterances in the lexicon for every target. In Exp. 2 and Exp. 3, however, the space of possible utterances was large enough that this exhaustive approach became impractical. We noticed that the probability of using some utterances (e.g. `table' to refer to a Dalmatian) was low enough that we could prune the utterance space to only those that could plausibly apply to the objects in context without substantially altering the model's behavior. Future work must address how predictions may change as more complex referring expressions outside the scope of this paper enter the set of alternatives (e.g. the option of combining adjectives with nominal expressions, as in the cute, spotted dog).
In the following we discuss a number of intriguing questions that this work raises and avenues for future research it suggests.
`Overinformativeness'
This work challenges the traditional notion of overinformativeness as it is commonly employed in the linguistic and psychological literature. The reason that redundant referring expressions are interesting for psycholinguists to study is that they seem to constitute a clear violation of rational theories of language production. For example, Grice's Quantity-2 maxim, which asks of speakers to “not make [their] contribution more informative than is required” BIBREF17 , appears violated by any redundant referring expression – if one feature uniquely distinguishes the target object from the rest and a second one does not, mentioning the second does not contribute any information that is not already communicated by the first. Hence, the second is considered `overinformative', a referring expression that contains it `overspecified.'
This conception of (over-)informativeness assumes that all modifiers are born equal – i.e., that there are no a priori differences in the utility of mentioning different properties of an object. Under this conception of modifiers, there are hard lines between modifiers that are and aren't informative in a context. However, what we have shown here is that under a continuous semantics, a modifier that would be regarded as overinformative under the traditional conception may in fact communicate information about the referent. The more visual variation there is in the scene, and the less noisy the redundant modifier is compared to the modifier that selects the dimension that uniquely singles out the target, the more information the redundant modifier adds about the referent, and the more likely it therefore is to be mentioned. This work thus challenges the traditional notion of utterance overinformativeness by providing an alternative that captures the quantitative variation observed in speakers' production in a principled way while still assuming that speakers are aiming to be informative, and is compatible with other efficiency-based accounts of `overinformative' referring expressions (e.g., BIBREF23 , sedivy2003a, rubiofernandez2016).
But this raises a question: what counts as a truly overinformative utterance under RSA with a continuous semantics? Cs-RSA shifts the standard for overinformativeness and turns it into a graded notion: the less expected the use of a redundant modifier is contextually, the more the use of that modifier should be considered overinformative. For example, consider again Figure 8 : the less scene variation there is, the more truly overinformative the use of the redundant modifier is. Referring to the big purple stapler when there are only purple staplers in the scene should be considered overinformative. If there is one red stapler, the utterance should be judged less overinformative, and the more non-purple staplers there are, the less overinformative the utterance should be judged. We leave a systematic test of this prediction for our stimuli for future research, though we point to some qualitative examples where it has been borne out previously in the next subsection.
Comprehension
While the account proposed in this paper is an account of the production of referring expressions, it can be extended straightforwardly to comprehension. RSA models typically assume that listeners interpret utterances by reasoning about their model of the speaker. In this paper we have provided precisely such a model of the speaker. In what way should the predicted speaker probabilities enter into comprehension? There are two interpretations of this question: first, what is the ultimate interpretation that listeners who reason about speakers characterized by the model provided in this paper arrive at, i.e. what are the predictions for referent choice? And second, how do the production probabilities enter into online processing of prima facie overinformative utterances? The first question has a clear answer. For the second question we offer a more speculative answer.
Most RSA reference models, unlike the one reported in this paper, have focused on comprehension BIBREF14 , BIBREF40 , BIBREF51 , BIBREF52 . The formula that characterizes pragmatic listeners' referent choices is:
$$P_{L_1}(o \mid u) \propto P_{S_1}(u \mid o) \cdot P(o)$$
That is, the pragmatic listener interprets utterance $u$ (e.g., the big purple stapler) via Bayesian inference, taking into account both the speaker probability of producing the big purple stapler and its alternatives, given a particular object $o$ the speaker had in mind, as well as the listener's prior beliefs about which object the speaker is likely to intend to refer to in the context. For the situations considered in this paper, in which the utterance is semantically compatible with only one of the referents in the context, this always predicts that the listener should choose the target. And indeed, in Exps. 1-3 the error rate on the listeners' end was always below 1%. From a referent choice point of view, then, these contexts are not very interesting. They are much more interesting from an online processing point of view, which we discuss next.
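Given a speaker matrix like the one sketched in the General Discussion above, the pragmatic listener in the formula above is a single Bayes step; a minimal sketch, assuming a uniform prior over referents:

```python
import numpy as np

def pragmatic_listener(S1, prior=None):
    """P_L1(o | u) proportional to P_S1(u | o) * P(o).

    S1: speaker probabilities, shape (num_utterances, num_objects).
    prior: prior over objects; uniform if not given.
    """
    num_objects = S1.shape[1]
    if prior is None:
        prior = np.full(num_objects, 1.0 / num_objects)
    unnorm = S1 * prior[None, :]                          # weight each object column by its prior
    return unnorm / unnorm.sum(axis=1, keepdims=True)     # normalise over objects per utterance

# Usage with the toy speaker matrix S1 from the earlier sketch:
# L1 = pragmatic_listener(S1)
# L1[utterances.index("small blue pin")] is the referent distribution
# after hearing the redundant utterance.
```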
The question that has typically been asked about the online processing of redundant utterances is this: do redundant utterances, compared to their minimally specified alternatives, help or hinder comprehenders in choosing the intended referent? `Help' and `hinder' are typically translated into `speed up' and `slow down', respectively. What does the RSA model presented here have to say about this?
In sentence processing, the current wisdom is that the processing effort spent on linguistic material is related to how surprising it is BIBREF53 , BIBREF54 . In particular, an utterance's log reading time is linear in its surprisal BIBREF55 , where surprisal is defined as $-\log p(u)$ . In these studies, surprisal is usually estimated from linguistic corpora. Consequently, an utterance of the big purple stapler receives a particular probability estimate independent of the non-linguistic context it occurred in. Here we provide a speaker model from which we can derive estimates of pragmatic surprisal directly for a particular context. We can thus speculate on a linking hypothesis: the more expected a redundant utterance is under the pragmatic continuous semantics speaker model, the faster it should be to process compared to its minimally specified alternative, all else being equal. We have shown that redundant expressions are more likely than minimal expressions when the sufficient dimension is relatively noisy and scene variation is relatively high. Under our speculative linking hypothesis, the redundant expression should be easier to process in these sorts of contexts than in contexts where the redundant expression is relatively less likely.
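Under this linking hypothesis, pragmatic surprisal can be read directly off the speaker model sketched earlier; the snippet below shows the computation (the mapping from surprisal to reading or response times is, of course, not fixed by the model itself):

```python
import numpy as np

def pragmatic_surprisal(S1, utterance_idx, object_idx):
    """-log P_S1(u | o): surprisal of an utterance given the intended referent,
    for a speaker matrix S1 of shape (num_utterances, num_objects)."""
    return -np.log(S1[utterance_idx, object_idx])

# The prediction discussed in the text: the noisier the sufficient dimension and
# the higher the scene variation, the lower the surprisal of the redundant
# utterance, and hence (all else being equal) the easier it should be to process.
```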
Is there evidence that listeners do behave in accordance with this prediction? Indeed, the literature reports evidence that in situations where the redundant modifier does provide some information about the referent, listeners are faster to respond and select the intended referent when they observe a redundant referring expression than when they observe a minimal one BIBREF2 , BIBREF30 . However, there is also evidence that redundancy sometimes incurs a processing cost: both Engelhardt2011 and Davies2013 (Exp. 2) found that listeners were slower to identify the target referent in response to redundant compared to minimal utterances. It is useful to examine the stimuli they used. In the Engelhardt et al study, there was only one distractor that varied in type, i.e., type was sufficient for establishing reference. This distractor varied either in size or in color. Thus, scene variation was very low and redundant expressions therefore likely surprising. Interestingly, the incurred cost was greater for redundant size than for redundant color modifiers, in line with the RSA predictions that color should be generally more likely to be used redundantly than size. In the Davies et al study, the `overinformative' conditions contained displays of four objects which differed in type. Stimuli were selected via a production pre-test: only those objects that in isolation were not referred to with a modifier were selected for the study. That is, stimuli were selected precisely on the basis that redundant modifier use would be unlikely.
While the online processing of redundant referring expressions is yet to be systematically explored under the cs-RSA account, this cursory overview of the patterns reported in the existing literature suggests that pragmatic surprisal may be a plausible linking function from model predictions to processing times. Excitingly, it has the potential for unifying the equivocal processing time evidence by providing a model of utterance probabilities that can be computed from the features of the objects in the context.
Continuous semantics
The crucial component of the model that allows for capturing `overinformativeness' effects is the continuous semantics. For the purpose of Exp. 1 (modifier choice), a semantic value was assigned to modifier type. The semantics of modifiers was underlyingly truth-conditional and the semantic value captured the probability that a modifier's truth conditions would accidentally be inverted. This model included only two semantic values, one for size and one for color, which we inferred from the data. For the datasets from Exps. 2 and 3, we then extended the continuous semantics to apply at the level of utterance-object combinations (e.g., banana vs. blue banana as applied to the blue banana item, dalmatian vs. dog as applied to the dalmatian item) to account for typicality effects in modifier and nominal choice. In this instantiation of the model, the semantic value differed for every utterance-object combination and captured how good an instance of an utterance an object was. These values were elicited experimentally to avoid over-fitting, and for the dataset from Exp. 2 we found further that a combination of a noisy truth-conditional semantics and the empirically elicited semantics best accounted for the obtained production data.
What we have said nothing about thus far is what determines these semantic values; in particular, which aspects of language users' experience – perceptual, conceptual, communicative, linguistic – they represent. We will offer some speculative remarks and directions for future research here.
First, semantic values may represent the difficulty associated with verifying whether the property denoted by the utterance holds of the object. This difficulty may be perceptual – for example, it may be relatively easier to visually determine of an object whether it is red than whether it is big (at least in our stimuli). Similarly, at the object-utterance level, it may be easier to determine of a yellow banana than of a blue banana whether it exhibits banana-hood, consequently yielding a lower semantic value for a blue banana than for a yellow banana as an instance of banana. Further, the value may be context-invariant or context-dependent. If it is context-invariant, the semantic value inferred for color vs. size, for instance, should not vary by making size differences more salient and color differences less salient. If, instead, it is context-dependent, increasing the salience of size differences and decreasing the salience of color differences should result, e.g., in color modifiers being more noisy, with concomitant effects on production, i.e., redundant color modifiers should become less likely. This is indeed what Viethen2017 found.
Another possibility is that semantic values represent aspects of agents' prior beliefs (world knowledge) about the correlations between features of objects. For example, conditioning on an object being a banana, experience dictates that the probability of it being yellow is much greater than of it being blue. This predicts the relative ordering of the typicality values we elicited empirically, i.e., the blue banana received a lower semantic value than the yellow banana as an instance of banana.
Another possibility is that the semantic values capture the past probability of communicative success in using a particular expression. For example, the semantic value of banana as applied to a yellow banana may be high because in the past, referring to yellow bananas simply as banana was on average successful. Conversely, the semantic value of banana as applied to a blue banana may be low because in the past, referring to blue bananas simply as banana was on average unsuccessful (or the speaker may have uncertainty about its communicative success because they have never encountered blue bananas before). Similarly, the noise difference between color and size modifiers may be due to the inherent relativity of size modifiers compared to color modifiers – while color modifiers vary somewhat in meaning across domains (consider, e.g., the difference in redness between red hair and red wine), the interpretation of size modifiers is highly dependent on a comparison class (consider, e.g., the difference between a big phone and a big building). In negotiating what counts as red, then, speakers are likely to agree more often than in negotiating what counts as big. That is, size adjectives are more subjective than color adjectives. If semantic values encode adjective subjectivity, speakers should be even more likely to redundantly use adjectives that are more objective than color. In a study showing that adjective subjectivity is almost perfectly correlated with an adjective's average distance from the noun, scontras2017 collected subjectivity ratings for many different adjectives and found that material adjectives like wooden and plastic are rated to be even more objective than color adjectives. Thus, under the hypothesis that semantic values represent adjective subjectivity, material adjectives should be even more likely to be used redundantly than color adjectives. This is not the case. For instance, sedivy2003a reports that material adjectives are used redundantly about as often as size adjectives. Hence, while the hypothesis that semantic values capture the past probability of communicative success in using a particular expression has yet to be systematically investigated, subjectivity alone seems not to be the determining factor.
Finally, it is also possible that semantic values are simply an irreducible part of the lexical entry of each utterance-object pair. This seems unlikely because it would require a separate semantic value for each utterance and object token, and most potentially encounterable object tokens in the world have not been encountered, making it impossible to store utterance-token-level values. However, it is possible that, reminiscent of prototype theory, semantic values are stored at the level of utterances and object types. This view of semantic values suggests that they should not be updated in response to further exposure of objects. For example, if semantic values were a fixed component of the lexical entry banana, then even being exposed to a large number of blue bananas should not change the value. This seems unlikely but merits further investigation.
The various possibilities for the interpretation of the continuous semantic values included in the model are neither independent nor incompatible with each other. Disentangling these possibilities presents an exciting avenue for future research.
Audience design
One question which has plagued the literature on language production is that of whether, and to what extent, speakers actually tailor their utterances to their audience BIBREF56 , BIBREF57 , BIBREF58 . This is also known as the issue of audience design. With regards to redundant referring expressions, the question is whether speakers produce redundant expressions because it is helpful to them (i.e., due to internal production pressures) or because it is helpful to their interlocutor (i.e., due to considerations of audience design). For instance, Walker1993 shows that redundancy is more likely when processing resources are limited. On the other hand, there is evidence that redundant utterances are frequently used in response to signs of listener non-comprehension, when responding to listener questions, or when speaking to strangers BIBREF59 , suggesting at least some consideration of listeners' needs.
RSA seems to make a claim about this issue: speakers are trying to be informative with respect to a literal listener. That is, it would seem that speakers produce referring expressions that are tailored to their listeners. However, this is misleading. The ontological status of the literal listener is as a “dummy component” that allows the pragmatic recursion to get off the ground. Actual listeners are, in line with previous work and as briefly discussed above, more likely to fall into the class of pragmatic $L_1$ listeners; listeners who reason about the speaker's intended meaning via Bayesian inference BIBREF14 , BIBREF60 .
Because RSA is a computational-level theory BIBREF61 of language use, it does not claim that the mechanism of language production requires that speakers actively consult an internal model of a listener every time they choose an utterance, just that the distribution of utterances they produce reflect informativity with respect to such a model. It is possible that this distribution is cached or computed using some other algorithm that doesn't explicitly involve a listener component.
Thus, the RSA model as formulated here remains agnostic about whether speakers' (over-) informativeness should be considered geared towards listeners' needs or simply a production-internal process. Instead, the claim is that redundancy emerges as a property of the communicative situation as a whole.
Other factors that affect redundancy
RSA with a continuous semantics as presented in this paper straightforwardly accounts for effects of typicality, cost, and scene variation on redundancy in referring expressions. However, other factors have been identified as contributing to redundancy. For example, rubiofernandez2016 showed that colors are mentioned more often redundantly for clothes than for geometrical shapes. Her explanation is that knowing an object's color is generally more useful for clothing than it is for shapes. It is plausible that agents' knowledge of goals may be relevant here. For example, knowing the color of clothing is relevant to the goal of deciding what to wear or buy. In contrast, knowing the color of geometrical shapes is rarely relevant to any everyday goal agents might have. While the RSA model as implemented here does not accommodate an agent's goals, it can be extended to do so via projection functions, as has been done for capturing figurative language use (e.g., BIBREF23 , kao2014) or question-answer behavior BIBREF62 . This should be explored further in future research.
One factor that has been repeatedly discussed in the literature and that we have not taken up here is the incrementality of language production. For instance, according to Pechmann1989, incrementality is to blame for redundancy: speakers retrieve and subsequently produce words as soon as they can. Because color modifiers are easier to retrieve than size modifiers, speakers produce them regardless of whether or not they are redundant. The problem with this account is that it predicts that the preferred adjective order should be reversed, i.e., color adjectives should occur before size adjectives. Pechmann does observe some instances of this occurring, but not many. In addition, it is unclear how incrementality could account for the systematic increase in color redundancy with increasing scene variation and decreasing color typicality, unless one makes the auxiliary assumption that the more contextually discriminative or salient color is, the more easily retrievable the modifier is. Indeed, Clark2004 emphasize the importance of salience against the common ground in speakers' decisions about which of an object's properties to include in a referring expression. However, there are other ways incrementality could play a role. For example, mentioning the color adjective may buy the speaker time when the noun is hard to retrieve. This predicts that in languages with post-nominal adjectives, where this delay strategy cannot be used for noun planning, there should be less redundant color mention; indeed, this is what rubiofernandez2016 shows for Spanish. The ways in which considerations of incremental language production can and should be incorporated in RSA are yet to be explored.
Extensions to other language production phenomena
In this paper we focused on providing a computationally explicit account of definite modified and nominal referring expressions in reference games, focusing on the use of prenominal size and color adjectives as well as on the taxonomic level of noun reference. The cs-RSA model can be straightforwardly extended to different nominal domains and different properties. For instance, the literature has also explored `overinformative' referring expressions that include material (wooden, plastic), other dimensional (long, short), and other physical (spotted, striped) adjectives.
However, beyond the relatively limited linguistic forms we have explored here, future research should also investigate the very intriguing potential for this approach to be extended to any language production phenomenon that involves content selection, including in the domain of reference (pronouns, names, definite descriptions with post-nominal modification) and event descriptions. For example, in investigations of optional instrument mentions, brown1987 showed that atypical instruments are more likely to be mentioned than typical ones – if a stabbing occurred with an ice pick, speakers prefer The man was stabbed with an ice pick rather than The man was stabbed. If instead a stabbing occurred with a knife, The man was stabbed is preferred over The man was stabbed with a knife. This is very much parallel to the case of atypical color mention.
More generally, the approach should extend to any content selection phenomenon that affords a choice between a more or less specific utterance. Whenever the more specific utterance adds sufficient information, it should be included. This is related to surprisal-based theories of production like Uniform Information Density (UID; BIBREF23 , jaeger2006, levy2007, frank2008, jaeger2010), where researchers have found that speakers are more likely to omit linguistic signal if the underlying meaning or syntactic structure is highly predictable. Importantly, UID diverges from our account in that it is an account of the choice between meaning-equivalent alternative utterances and includes no pragmatic reasoning component.
Conclusion
In conclusion, we have provided an account of redundant referring expressions that challenges the traditional notion of `overinformativeness', unifies multiple language production literatures, and has the potential for many further extensions. We take this work to provide evidence that, rather than being wastefully overinformative, speakers are rationally redundant.
Effects of semantic value on utterance probabilities
Here we visualize, in Figure 22, the effect of different adjective types' semantic value on the probability of producing the insufficient color-only utterance (blue pin), the sufficient size-only utterance (small pin), or the redundant color-and-size utterance (small blue pin) to refer to the target in the context of Figure 1, under varying $\beta _i$ values. This constitutes a generalization of Figure 4, which is duplicated in row 6 ($\beta _i = 30$).
Pre-experiment quiz
Before continuing to the main experiment, each participant was required to correctly respond “True” or “False” to the following statements. Correct answers are given in parentheses after the statement.
Exp. 1 items
The following table lists all 36 object types from Exp. 1 and the colors they appeared in:
Typicality effects in Exp. 1
To assess whether we replicate the color typicality effects previously reported in the literature BIBREF7 , BIBREF25 , BIBREF10 , we elicited color typicality norms for each of the items in Exp. 1 and then included typicality as an additional predictor of redundant adjective use in the regression analysis reported in Section UID55 .
Methods
We recruited 60 participants over Amazon's Mechanical Turk who were each paid $0.25 for their participation.
On each trial, participants saw one of the big versions of the items used in Exp. 1 and were asked to answer the question “How typical is this for an X?” on a continuous slider with endpoints labeled “very atypical” to “very typical.” X was a referring expression consisting of either only the correct noun (e.g., stapler) or the noun modified by the correct color (e.g., red stapler). Figure 23 shows an example of a modified trial.
Each participant saw each of the 36 objects once. An object was randomly displayed in one of the two colors it occurred with in Exp. 1 and was randomly displayed with either the correct modified utterance or the correct unmodified utterance, in order to obtain roughly equal numbers of object-utterance combinations.
Importantly, we only elicited typicality norms for unmodified utterances and utterances with color modifiers, but not utterances with size modifiers. This was because it is impossible to obtain size typicality norms for objects presented in isolation, due to the inherently relational nature of size adjectives. Consequently, we only test for the effect of typicality on size-sufficient trials, i.e. when color is redundant.
Results and discussion
We coded the slider endpoints as 0 (“very atypical”) and 1 (“very typical”), essentially treating each response as a typicality value between 0 and 1. For each combination of object, color, and utterance (modified/unmodified), we computed that item's mean. Mean typicalities were generally lower for unmodified than for modified utterances: mean typicality for unmodified utterances was .67 (sd=.17, mode=.76) and for modified utterances .75 (sd=.12, mode=.81). This can also be seen on the left in Figure 24 . Note that, as expected given how the stimuli were constructed, typicality was generally skewed towards the high end, even for unmodified utterances. This means that there was not much variation in the difference in typicality between modified and unmodified utterances. We will refer to this difference as typicality gain, reflecting the overall gain in typicality via color modification over the unmodified baseline. As can be seen on the right in Figure 24 , in most cases typicality gain was close to zero.
This makes the typicality analysis difficult: if typicality gain is close to zero for most cases (and, taking into account confidence intervals, effectively zero), it is hard to evaluate the effect of typicality on redundant adjective use. In order to maximize power, we therefore conducted the analysis only on those items for which for at least one color the confidence intervals for the modified and unmodified utterances did not overlap. There were only four such cases: (pink) golfball, (pink) wedding cake, (green) chair, and (red) stapler, for a total of 231 data points.
Predictions differ for size-sufficient and color-sufficient trials. Given the typicality effects reported in the literature and the predictions of cs-RSA, we expect greater redundant color use on size-sufficient trials with increasing typicality gain. The predictions for redundant size use on color-sufficient trials are unclear from the previous literature. Cs-RSA, however, predicts greater redundant size use with decreasing typicality gain: small color typicality gains reflect the relatively low out-of-context utility of color. In these cases, it may be useful to redundantly use a size modifier even if that modifier is noisy. If borne out, these predictions should surface in an interaction between sufficient property and typicality gain. Visual inspection of the empirical proportions of redundant adjective use in Figure 25 suggests that this pattern is indeed borne out.
In order to investigate the effect of typicality gain on redundant adjective use, we conducted a mixed effects logistic regression analysis predicting redundant over minimal adjective use from fixed effects of scene variation, sufficient dimension, the interaction of scene variation and sufficient property, and the interaction of typicality gain and sufficient property. This is the same model as reported in Section UID55 , with the only difference that the interaction between sufficient property and typicality gain was added. All predictors were centered before entering the analysis. The model contained the maximal random effects structure that allowed it to converge: by-participant and by-item (where item was a color-object combination) random intercepts.
The model summary is shown in Table 8 . We replicate the effects of sufficient property and scene variation observed earlier on this smaller dataset. Crucially, we observe a significant interaction between sufficient property and typicality gain. Simple effects analysis reveals that this interaction is due to a positive effect of typicality gain on redundant adjective use in the size-sufficient condition ( $\beta = 4.47$ , $SE = 1.65$ , $p < .007$ ) but a negative effect of typicality gain on redundant adjective use in the color-sufficient condition ( $\beta = -5.77$ , $SE = 2.49$ , $p < .03$ ).
An important point is of note: the typicality elicitation procedure we employed here is somewhat different from that employed by Westerbeek2015, who asked their participants “How typical is this color for this object?” We did this for conceptual reasons: the values that go into the semantics of the RSA model are most easily conceptualized as the typicality of an object as an instance of an utterance. While the typicality of a feature for an object type no doubt plays into how good of an instance of the utterance the object is, deriving our typicalities from the statistical properties of the subjective distributions of features over objects is beyond the scope of this paper. However, in a separate experiment we did ask participants the Westerbeek question. The correlation between mean typicality ratings from the Westerbeek version and the unmodified “How typical is this for X” version was .75. The correlation between the Westerbeek version and the modified version was .64. The correlation between the Westerbeek version and typicality gain was -.52.
For comparison, including typicality means obtained via the Westerbeek question as a predictor instead of typicality gain on the four high-powered items replicated the significant interaction between typicality and sufficient property ( $\beta = -6.77$ , $SE = 1.88$ , $p < .0003$ ). Simple effects analysis revealed that the interaction is again due to a difference in slope in the two sufficient property conditions: in the size-sufficient condition, color is less likely to be mentioned with increasing color typicality ( $\beta = -3.66$ , $SE = 1.18$ , $p < .002$ ), whereas in the color-sufficient condition, size is more likely to be mentioned with increasing color typicality ( $\beta = 3.09$ , $SE = 1.45$ , $p < .04$ ).
We thus overall find moderate evidence for typicality effects in our dataset. Typicality effects are strong for those items that clearly display typicality differences between the modified and unmodified utterance, but much weaker for the remaining items. That the evidence for typicality effects is relatively scarce is no surprise: the stimuli were specifically designed to minimize effects of typicality. However, the fact that both ways of quantifying typicality predicted redundant adjective use in the expected direction suggests that with more power or with stimuli that exhibit greater typicality variation, these effects may show up more clearly.
Experiment 3 items
The following table lists all items used in Exp. 3 and the mean empirical utterance lengths that participants produced to refer to them:
Typicality norms for Experiment 3
Analogous to the color typicality norms elicited for utterances in Exps. 1-2, we elicited typicality norms for utterances in Exp. 3. The elicited typicalities were used in the mixed effects analyses and Bayesian Data Analysis reported in Section "Unmodified referring expressions: nominal taxonomic level" . | Yes |
f748cb05becc60e7d47d34f4c5f94189bc184d33 | f748cb05becc60e7d47d34f4c5f94189bc184d33_0 | Q: What are bottleneck features?
Text: Introduction
Social media has become a popular medium for individuals to express opinions and concerns on issues impacting their lives BIBREF0 , BIBREF1 , BIBREF2 . In countries without adequate internet infrastructure, like Uganda, communities often use phone-in talk shows on local radio stations for the same purpose. In an ongoing project by the United Nations (UN), radio-browsing systems have been developed to monitor such radio shows BIBREF3 , BIBREF4 . These systems are actively and successfully supporting UN relief and developmental programmes. The development of such systems, however, remains dependent on the availability of transcribed speech in the target languages. This dependence has proved to be a key impediment to the rapid deployment of radio-browsing systems in new languages, since skilled annotators proficient in the target languages are hard to find, especially in crisis conditions.
In a conventional keyword spotting system, where the goal is to search through a speech collection for a specified set of keywords, automatic speech recognition (ASR) is typically used to generate lattices which are then searched to predict the presence or absence of keywords BIBREF5 , BIBREF6 . State-of-the-art ASR, however, requires large amounts of transcribed speech audio BIBREF7 , BIBREF8 . In this paper we consider the development of a keyword spotter without such substantial and carefully-prepared data. Instead, we rely only on a small number of isolated repetitions of keywords and a large body of untranscribed data from the target domain. The motivation for this setting is that such isolated keywords should be easier to gather, even in a crisis scenario. Several studies have attempted ASR-free keyword spotting using a query-by-example (QbyE) retrieval procedure. In QbyE, the search query is provided as audio rather than text. Dynamic time warping (DTW) is typically used to search for instances of the query in a speech collection BIBREF9 , BIBREF10 . As an alternative, several ways of obtaining fixed-dimensional representations of input speech have been considered BIBREF11 . Recurrent neural networks (RNNs) BIBREF12 , BIBREF13 , autoencoding encoder-decoder RNNs BIBREF14 , BIBREF15 , and Siamese convolutional neural networks (CNNs) BIBREF16 have all been used to obtain such fixed-dimensional representations, which allow queries and search utterances to be directly compared without alignment. For keyword spotting, a variant of this approach has been used where textual and acoustic inputs are mapped into a shared space BIBREF17 . Most of these neural approaches, however, rely on large amounts of training data.
In this paper, we extend the ASR-free keyword spotting approach first presented in BIBREF18 . A small seed corpus of isolated spoken keywords is used to perform DTW template matching on a large corpus of untranscribed data from the target domain. The resulting DTW scores are used as targets for training a CNN-based keyword spotter. Hence we take advantage of DTW-based matching—which can be performed with limited data—and combine this with CNN-based searching—giving speed benefits since it does not require alignment. In our previous work, we used speech data only from the target language. Here we consider whether data available for other (potentially well-resourced) languages can be used to improve performance. Specifically, multilingual bottleneck features (BNFs) have been shown to provide improved performance by several authors BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . We investigate whether such multilingual bottleneck feature extractors (trained on completely different languages) can be used to extract features for our target data, thereby improving the overall performance of our CNN-DTW keyword spotting approach.
To perform a thorough analysis of our proposed approach (which requires transcriptions), we use a corpus of South African English. We use BNFs trained on two languages and on ten languages as input features to the CNN-DTW system, and compare these to MFCCs. We also consider features from unsupervised autoencoders, trained on unlabelled datasets from five languages. We show that the 10-language BNFs work best overall, giving results that makes CNN-DTW a viable option for practical use.
Radio browsing system
The first radio browsing systems implemented as part of the UN's humanitarian monitoring programmes rely on ASR systems BIBREF3 . Human analysts filter speech segments identified by the system and add these to a searchable database to support decision making. To develop the ASR system, at least a small amount of annotated speech in the target language is required BIBREF4 . However, the collection of even a small fully transcribed corpus has proven difficult or impossible in some settings. In recent work, we have therefore proposed an ASR-free keyword spotting system based on CNNs BIBREF18 . CNN classifiers typically require a large number of training examples, which are not available in our setting. Instead, we use a small set of recorded isolated keywords, which are then matched against a large collection of untranscribed speech drawn from the target domain using a DTW-based approach. The resulting DTW scores are then used as targets for a CNN. The key is that it is not necessary to know whether or not the keywords do in fact occur in this untranscribed corpus; the CNN is trained simply to emulate the behaviour of the DTW. Since the CNN does not perform any alignment, it is computationally much more efficient than DTW. The resulting CNN-DTW model can therefore be used to efficiently detect the presence of keywords in new input speech. Figure FIGREF2 shows the structure of this CNN-DTW radio browsing system.
Data
We use the same datasets used in our previous work BIBREF18 . As templates, we use a small corpus of isolated utterances of 40 keywords, each spoken twice by 24 South African speakers (12 male and 12 female). This set of 1920 labelled isolated keywords constitutes the only transcribed target-domain data that we use to train our keyword spotter. As untranscribed data, we use a corpus of South African Broadcast News (SABN). This 23-hour corpus consists of a mix of English newsreader speech, interviews, and crossings to reporters, broadcast between 1996 and 2006 BIBREF23 . The division of the corpus into different sets is shown in Table TABREF3 . The SABN training set is used as our untranscribed data to obtain targets for the CNN. The models then perform keyword spotting on the SABN test set. Since this set is fully transcribed, performance evaluation and analysis is possible. The isolated keywords were recorded under fairly quiet conditions and there was no speaker overlap with the SABN dataset. Hence there is a definite mismatch between the datasets. This is intentional as it reflects the intended operational setting of our system.
Keyword Spotting Approaches
Here we describe the combined CNN-DTW keyword spotting method. We also use direct DTW and a CNN classifier as baselines, and hence these are also briefly discussed.
Dynamic time warping (DTW)
In low-resource settings with scarce training data, DTW is an attractive approach, but it can be prohibitively slow since it requires repeated alignment. We make use of a simple DTW implementation in which isolated keywords slide over the search audio with a 3-frame skip, and a frame-wise comparison is performed while warping the time axis. From this, a normalized per-frame cosine cost is obtained, with 0 indicating a portion of speech that matches the keyword exactly. The presence or absence of the keyword is determined by applying an appropriate threshold to this cost.
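A minimal NumPy sketch of this sliding DTW search is given below. The window handling, the per-frame normalisation, and the treatment of the 3-frame skip are simplifying assumptions rather than the exact implementation used in the system.

```python
import numpy as np

def cosine_dist(a, b):
    """Pairwise cosine distances between two feature sequences (frames x dims)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

def dtw_cost(query, segment):
    """DTW alignment cost between a keyword template and a speech segment,
    normalised by the combined length (one common normalisation choice)."""
    D = cosine_dist(query, segment)
    n, m = D.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = D[i - 1, j - 1] + min(acc[i - 1, j],
                                              acc[i, j - 1],
                                              acc[i - 1, j - 1])
    return acc[n, m] / (n + m)

def sliding_dtw(query, search, frame_skip=3):
    """Slide the keyword template over the search utterance with a frame skip
    and return the best (lowest) normalised DTW cost over all window positions."""
    win = query.shape[0]
    starts = range(0, max(1, search.shape[0] - win + 1), frame_skip)
    return min(dtw_cost(query, search[s:s + win]) for s in starts)
```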
Convolutional neural network (CNN) classifier
As a baseline, we train a CNN classifier as an end-to-end keyword spotter. This would be typical in high-resource settings BIBREF16 , BIBREF24 , BIBREF25 . We perform supervised training using the 1920 recorded isolated keywords, with negative examples drawn randomly from utterances in the SABN training set. For testing, a 60-frame window slides over the test utterances. The presence or absence of a keyword is again based on a threshold.
CNN-DTW keyword spotting
Rather than using labels (as in the CNN classifier above), the CNN-DTW keyword spotting approach uses DTW to generate sufficient training data as targets for a CNN. The CNN-DTW is subsequently employed as the keyword spotter; this is computationally much more efficient than direct DTW. DTW similarity scores are computed between our small set of isolated keywords and a much larger untranscribed dataset, and these scores are subsequently used as targets to train a CNN, as shown in Figure FIGREF9 . Our contribution here over our previous work BIBREF18 is to use multilingual BNFs instead of MFCCs, both for performing the DTW matching and as inputs to the CNN-DTW model. In Figure FIGREF9 , the upper half shows how the supervisory signals are obtained using DTW, and the lower half shows how the CNN is trained. The equation below shows how the score for keyword $k$ in utterance $u_i$ is computed, resulting in a vector of keyword scores for each utterance:

$$g_{i,k} = \min_{n}\, \min_{j}\ \mathrm{DTW}\left( X_{k,n},\, Y_{i,j} \right)$$

Here, $X_{k,n}$ is the sequence of speech features for the $n$-th exemplar of keyword $k$, $Y_{i,j}$ is a successive segment of utterance $u_i$, and $\mathrm{DTW}(X_{k,n}, Y_{i,j})$ is the DTW alignment cost between the speech features of the exemplar and the segment. Each cost $g_{i,k}$ is then mapped to a similarity $y_{i,k}$ in the interval $[0, 1]$, with 1 indicating a perfect match and 0 indicating dissimilarity, thus forming the target vector $\mathbf{y}_i$ for utterance $u_i$. A CNN is then trained using a summed cross-entropy loss (which is why the scores are mapped to the interval $[0, 1]$), with utterance $u_i$ as input and $\mathbf{y}_i$ as target. The CNN architecture is the same as that used in BIBREF18 . Finally, the trained CNN is applied to unseen utterances.
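To make the target generation and training loss concrete, the sketch below builds the soft DTW targets for one utterance and defines a small CNN with a summed cross-entropy loss in PyTorch. It reuses the sliding_dtw function sketched earlier; the cost-to-similarity mapping, architecture, and hyperparameters are placeholders rather than those of the actual system.

```python
import numpy as np
import torch
import torch.nn as nn

def make_targets(utterance_feats, keyword_exemplars):
    """For one utterance, compute y_k: the best DTW match over all exemplars of
    keyword k, mapped to [0, 1] so that 1 means a (near-)perfect match.
    Uses sliding_dtw as defined in the earlier DTW sketch."""
    targets = []
    for exemplars in keyword_exemplars:          # one list of exemplar arrays per keyword
        best = min(sliding_dtw(ex, utterance_feats) for ex in exemplars)
        targets.append(1.0 - min(best, 1.0))     # simple cost-to-similarity mapping (assumption)
    return np.array(targets, dtype=np.float32)

class KeywordCNN(nn.Module):
    """Small 1-D CNN over the feature sequence with one sigmoid output per keyword."""
    def __init__(self, feat_dim, num_keywords):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.out = nn.Linear(64, num_keywords)

    def forward(self, x):                        # x: (batch, frames, feat_dim)
        h = self.conv(x.transpose(1, 2)).squeeze(-1)
        return torch.sigmoid(self.out(h))

def summed_cross_entropy(pred, target, eps=1e-7):
    """Sum of per-keyword binary cross-entropies against the soft DTW targets."""
    return -(target * torch.log(pred + eps)
             + (1 - target) * torch.log(1 - pred + eps)).sum()
```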
Bottleneck and Autoencoder Features
Our previous work focused purely on using data from the low-resource target language. However, large annotated speech resources exist for several well-resourced languages. We investigate whether such resources can be used to improve the CNN-DTW system in the unseen low-resource language.
Bottleneck features
One way to re-use information extracted from other multilingual corpora is to use multilingual bottleneck features (BNFs), which have been shown to perform well in conventional ASR as well as in intrinsic evaluations BIBREF19 , BIBREF26 , BIBREF27 , BIBREF20 , BIBREF28 , BIBREF29 . These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available. The bottom layers of the network are normally shared across all training languages. The network then splits into separate parts for each of the languages, or has a single shared output. The final output layer has phone labels or HMM states as targets. The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'. The intuition is that this layer should capture aspects that are common across all the languages. We use such features from a multilingual neural network in our CNN-DTW keyword spotting approach. The BNFs are trained on a set of well-resourced languages different from the target language.
Different neural architectures can be used to obtain BNFs following the above methodology. Here we use time-delay neural networks (TDNNs) BIBREF30 . We consider two models: a multilingual TDNN trained on only two languages, and a TDNN trained on ten diverse languages. Our aim is to investigate whether it is necessary to have a large set of diverse languages, or whether it is sufficient to simply obtain features from a supervised model trained on a smaller set of languages.
An 11-layer 2-language TDNN was trained using 40-dimensional high-resolution MFCC features as input on a combined set of Dutch and Frisian speech, as described in BIBREF31 . Speaker adaptation is used with lattice-free maximum mutual information training, based on the Kaldi Switchboard recipe BIBREF32 . Each layer uses ReLU activations with batch normalisation. Combining the FAME BIBREF33 and CGN BIBREF34 corpora yields a training set of 887 hours of data in the two languages. 40-dimensional BNFs are extracted from the resulting model.
A 6-layer 10-language TDNN was trained on the GlobalPhone corpus, also using 40-dimensional high-resolution MFCC features as input, as described in BIBREF20 . For speaker adaptation, a 100-dimensional i-vector was appended to the MFCC input features. The TDNN was trained with a block-softmax, with the hidden layers shared across all languages and a separate output layer for each language. Each of the six hidden layers had 625 dimensions, and these were followed by a 39-dimensional bottleneck layer with ReLU activations and batch normalisation. Training was accomplished using the Kaldi Babel recipe on 198 hours of data in 10 languages (Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese) from GlobalPhone.
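The sketch below illustrates the general structure of such a multilingual bottleneck network in PyTorch: shared hidden layers, a low-dimensional bottleneck, and a separate output head per training language (a block softmax). It is a schematic feed-forward stand-in for the Kaldi TDNN recipes described above, not a reimplementation of them; the layer sizes and number of phone targets per language are illustrative.

```python
import torch
import torch.nn as nn

class MultilingualBottleneck(nn.Module):
    """Shared layers and a bottleneck, with one phone-target head per language.
    After training, features for an unseen language are read from the bottleneck."""
    def __init__(self, input_dim, bottleneck_dim, phone_targets_per_lang):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(input_dim, 625), nn.ReLU(), nn.BatchNorm1d(625),
            nn.Linear(625, 625), nn.ReLU(), nn.BatchNorm1d(625),
            nn.Linear(625, 625), nn.ReLU(), nn.BatchNorm1d(625),
        )
        self.bottleneck = nn.Sequential(
            nn.Linear(625, bottleneck_dim), nn.ReLU(), nn.BatchNorm1d(bottleneck_dim),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(bottleneck_dim, n) for n in phone_targets_per_lang]
        )

    def forward(self, x, lang_id):
        """Training forward pass: logits over the given language's phone targets."""
        return self.heads[lang_id](self.bottleneck(self.shared(x)))

    def extract_bnf(self, x):
        """Feature extraction for the target language: the heads are discarded."""
        with torch.no_grad():
            return self.bottleneck(self.shared(x))

# e.g. a 40-dimensional acoustic input, a 39-dimensional bottleneck, and
# 10 training languages with (hypothetically) 120 phone targets each:
# model = MultilingualBottleneck(40, 39, phone_targets_per_lang=[120] * 10)
```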
Autoencoder features
BNFs are trained in a supervised fashion with acoustic feature presented at the input and phone targets at the outputs. A more general scenario, however, is one in which training data is unlabelled, and these targets are therefore not known. In this case, it may be possible to learn useful representations by using an unsupervised model. An autoencoder is a neural network trained to reconstruct its input. By presenting the same data at the input and the output of the network while constraining intermediate connections, the network is trained to find an internal representation that is useful for reconstruction. These internal representations can be useful as features BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 . Like BNFs, autoencoders can be trained on languages different from the target language (often resulting in more data to train on).
Here we use a stacked denoising autoencoder BIBREF41 . In this model, each layer is trained individually like an autoencoder with added noise to reconstruct the output of the previous layer. Once a layer has been trained, its weights are fixed and its outputs become the inputs to the next layer to be trained. After all the layers are pre-trained in this fashion, the layers are stacked and fine-tuned. We use mean squared error loss and Adam optimisation BIBREF42 throughout. We trained a 7-layer stacked denoising autoencoder on an untranscribed dataset consisting of 160 h of Acholi, 154 h of Luganda, 9.45 h of Lugbara, 7.82 h of Rutaroo and 18 h of Somali data. We used 39-dimensional MFCCs (13 cepstra with deltas and delta-deltas) as input and extracted features from the 39-dimensional fourth layer.
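A greedy layer-wise pretraining loop of the kind described here can be sketched as follows in PyTorch. The activation function, noise level, learning rate, number of epochs, and intermediate layer sizes are illustrative assumptions; only the 39-dimensional input and fourth layer follow the text.

```python
import torch
import torch.nn as nn

def pretrain_stacked_dae(frames, layer_dims, noise_std=0.3, epochs=50, lr=1e-3):
    """Greedy layer-wise pretraining of a stacked denoising autoencoder.

    frames: tensor of acoustic feature vectors, shape (num_frames, input_dim).
    layer_dims: output dimensionality of each successive encoder layer.
    Returns the trained encoder layers, to be stacked and then fine-tuned jointly.
    """
    encoders, inputs = [], frames
    for out_dim in layer_dims:
        in_dim = inputs.shape[1]
        enc = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Tanh())
        dec = nn.Linear(out_dim, in_dim)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        mse = nn.MSELoss()
        for _ in range(epochs):
            noisy = inputs + noise_std * torch.randn_like(inputs)  # corrupt the input
            loss = mse(dec(enc(noisy)), inputs)                    # reconstruct the clean input
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                                      # freeze this layer and
            inputs = enc(inputs)                                   # feed its output onward
        encoders.append(enc)
    return encoders

# e.g. 39-dimensional MFCC input, with features later taken from the
# 39-dimensional fourth layer (other layer sizes hypothetical):
# layers = pretrain_stacked_dae(mfcc_frames, [100, 100, 100, 39, 100, 100, 39])
```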
Experimental setup
The experimental setup is similar to that of BIBREF18 . We consider three baseline systems: two DTW systems and a conventional CNN classifier.
Our proposed approach, CNN-DTW, is supervised by the DTW-KS system. Hyper-parameters for CNN-DTW were optimized using the target loss on the development set. Hence, the SABN transcriptions are not used for training or validation. Performance is reported in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC) and equal error rate (EER). The ROC is obtained by varying the detection threshold and plotting the false positive rate against the true positive rate. AUC, therefore, indicates the performance of the model independent of a threshold, with higher AUC indicating a better model. EER is the point at which the false positive rate equals the false negative rate and hence lower EER indicates a better model.
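The two evaluation metrics can be computed from per-utterance detection scores as in the sketch below, which assumes scikit-learn; the function and variable names are illustrative.

```python
# Sketch: AUC and EER from per-utterance keyword detection scores.
import numpy as np
from sklearn.metrics import roc_curve, auc

def auc_and_eer(labels, scores):
    """labels: 1 if the keyword occurs in the utterance, 0 otherwise.
    scores: the keyword spotter's detection score for each utterance."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    # EER: the operating point where false positive and false negative rates meet.
    idx = np.nanargmin(np.abs(fpr - fnr))
    eer = (fpr[idx] + fnr[idx]) / 2.0
    return auc(fpr, tpr), eer
```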
Experimental results
We consider four feature extractors in our experiments: baseline MFCCs, stacked denoising autoencoder features, BNFs from the 2-language TDNN, and BNFs from the 10-language TDNN.
In initial experiments, we first consider the performance of these features on development data. Specifically, we use the features as representations in the DTW-based keyword spotter (DTW-KS). Results are shown in Table TABREF23 . BNFs trained on 10 languages outperform all other approaches, with speaker normalisation giving a further slight improvement. Both the stacked autoencoder and the BNFs trained on two languages perform worse than the MFCC baseline. This seems to indicate that a larger number of diverse languages is beneficial for training BNFs, and that supervised models are superior to unsupervised models when applied to an unseen target language. However, further experiments are required to verify this definitively. Based on these development experiments, we compare MFCCs and TDNN-BNF-10lang-SPN features when used for keyword spotting on evaluation data.
Table TABREF24 shows the performance of the three baseline systems and CNN-DTW when using MFCCs and BNFs. In all cases except the CNN classifier, BNFs lead to improvements over MFCCs. Furthermore, we see that, when using BNFs, the CNN-DTW system performs almost as well as its DTW-KS counterpart. The DTW-KS system provided the targets with which the CNN-DTW system was trained, and hence represents an upper bound on the performance we can expect from the CNN-DTW wordspotter. When using BNFs, we see that the difference between the DTW-KS and CNN-DTW approaches becomes smaller compared to the difference for MFCCs. This results in the CNN-DTW system using BNFs almost achieving the performance of the DTW-KS system; the former, however, is computationally much more efficient since alignment is not required. On a conventional desktop PC with a single NVIDIA GeForce GTX 1080 GPU, CNN-DTW takes approximately 5 minutes compared to DTW-KS which takes 900 minutes on a 20-core CPU machine. Table TABREF24 shows that, in contrast to when MFCCs are used, a Gaussian noise layer (CNN-DTW-GNL) does not give further performance benefits for the BNF systems.
Figures FIGREF25 (a-f) show ROC plots for a selection of keywords which are representative of cases with both good and bad performance. AUC improves in all cases when switching from MFCCs to BNFs, except for health, where the difference is relatively small (all scores are close to chance on this keyword). In some cases, e.g. for wounded, the benefits of switching to BNFs in CNN-DTW are substantial. Interestingly, for keywords such as attack, the CNN-DTW system using BNFs actually marginally outperforms the DTW-KS system which is used to supervise it.
Conclusion
We investigated the use of multilingual bottleneck features (BNFs) and autoencoder features in a CNN-DTW keyword spotter. While autoencoder features and BNFs trained on two languages did not improve performance over MFCCs, BNFs trained on a corpus of 10 languages led to substantial improvements. We conclude that our overall CNN-DTW based approach, which combines the low-resource advantages of DTW with the speed advantages of CNNs, further benefits by incorporating labelled data from well-resourced languages through the use of BNFs when these are obtained from several diverse languages.
Acknowledgements: We thank the NVIDIA corporation for the donation of GPU equipment used for this research. We also gratefully acknowledge the support of Telkom South Africa. | Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese, South African English, These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available., The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'. |
1a06b7a2097ebbad0afc787ea0756db6af3dadf4 | 1a06b7a2097ebbad0afc787ea0756db6af3dadf4_0 | Q: What languages are considered?
Text: Introduction
Social media has become a popular medium for individuals to express opinions and concerns on issues impacting their lives BIBREF0 , BIBREF1 , BIBREF2 . In countries without adequate internet infrastructure, like Uganda, communities often use phone-in talk shows on local radio stations for the same purpose. In an ongoing project by the United Nations (UN), radio-browsing systems have been developed to monitor such radio shows BIBREF3 , BIBREF4 . These systems are actively and successfully supporting UN relief and developmental programmes. The development of such systems, however, remains dependent on the availability of transcribed speech in the target languages. This dependence has proved to be a key impediment to the rapid deployment of radio-browsing systems in new languages, since skilled annotators proficient in the target languages are hard to find, especially in crisis conditions.
In a conventional keyword spotting system, where the goal is to search through a speech collection for a specified set of keywords, automatic speech recognition (ASR) is typically used to generate lattices which are then searched to predict the presence or absence of keywords BIBREF5 , BIBREF6 . State-of-the-art ASR, however, requires large amounts of transcribed speech audio BIBREF7 , BIBREF8 . In this paper we consider the development of a keyword spotter without such substantial and carefully-prepared data. Instead, we rely only on a small number of isolated repetitions of keywords and a large body of untranscribed data from the target domain. The motivation for this setting is that such isolated keywords should be easier to gather, even in a crisis scenario. Several studies have attempted ASR-free keyword spotting using a query-by-example (QbyE) retrieval procedure. In QbyE, the search query is provided as audio rather than text. Dynamic time warping (DTW) is typically used to search for instances of the query in a speech collection BIBREF9 , BIBREF10 . As an alternative, several ways of obtaining fixed-dimensional representations of input speech have been considered BIBREF11 . Recurrent neural networks (RNNs) BIBREF12 , BIBREF13 , autoencoding encoder-decoder RNNs BIBREF14 , BIBREF15 , and Siamese convolutional neural networks (CNNs) BIBREF16 have all been used to obtain such fixed-dimensional representations, which allow queries and search utterances to be directly compared without alignment. For keyword spotting, a variant of this approach has been used where textual and acoustic inputs are mapped into a shared space BIBREF17 . Most of these neural approaches, however, rely on large amounts of training data.
In this paper, we extend the ASR-free keyword spotting approach first presented in BIBREF18 . A small seed corpus of isolated spoken keywords is used to perform DTW template matching on a large corpus of untranscribed data from the target domain. The resulting DTW scores are used as targets for training a CNN-based keyword spotter. Hence we take advantage of DTW-based matching—which can be performed with limited data—and combine this with CNN-based searching—giving speed benefits since it does not require alignment. In our previous work, we used speech data only from the target language. Here we consider whether data available for other (potentially well-resourced) languages can be used to improve performance. Specifically, multilingual bottleneck features (BNFs) have been shown to provide improved performance by several authors BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . We investigate whether such multilingual bottleneck feature extractors (trained on completely different languages) can be used to extract features for our target data, thereby improving the overall performance of our CNN-DTW keyword spotting approach.
To perform a thorough analysis of our proposed approach (which requires transcriptions), we use a corpus of South African English. We use BNFs trained on two languages and on ten languages as input features to the CNN-DTW system, and compare these to MFCCs. We also consider features from unsupervised autoencoders, trained on unlabelled datasets from five languages. We show that the 10-language BNFs work best overall, giving results that make CNN-DTW a viable option for practical use.
Radio browsing system
The first radio browsing systems implemented as part of the UN's humanitarian monitoring programmes rely on ASR systems BIBREF3 . Human analysts filter speech segments identified by the system and add these to a searchable database to support decision making. To develop the ASR system, at least a small amount of annotated speech in the target language is required BIBREF4 . However, the collection of even a small fully transcribed corpus has proven difficult or impossible in some settings. In recent work, we have therefore proposed an ASR-free keyword spotting system based on CNNs BIBREF18 . CNN classifiers typically require a large number of training examples, which are not available in our setting. Instead, we therefore use a small set of recorded isolated keywords, which are then matched against a large collection of untranscribed speech drawn from the target domain using a DTW-based approach. The resulting DTW scores are then used as targets for a CNN. The key is that it is not necessary to know whether or not the keywords do in fact occur in this untranscribed corpus; the CNN is trained simply to emulate the behaviour of the DTW. Since the CNN does not perform any alignment, it is computationally much more efficient than DTW. The resulting CNN-DTW model can therefore be used to efficiently detect the presence of keywords in new input speech. Figure FIGREF2 shows the structure of this CNN-DTW radio browsing system.
Data
We use the same datasets used in our previous work BIBREF18 . As templates, we use a small corpus of isolated utterances of 40 keywords, each spoken twice by 24 South African speakers (12 male and 12 female). This set of 1920 labelled isolated keywords constitutes the only transcribed target-domain data that we use to train our keyword spotter. As untranscribed data, we use a corpus of South African Broadcast News (SABN). This 23-hour corpus consists of a mix of English newsreader speech, interviews, and crossings to reporters, broadcast between 1996 and 2006 BIBREF23 . The division of the corpus into different sets is shown in Table TABREF3 . The SABN training set is used as our untranscribed data to obtain targets for the CNN. The models then perform keyword spotting on the SABN test set. Since this set is fully transcribed, performance evaluation and analysis is possible. The isolated keywords were recorded under fairly quiet conditions and there was no speaker overlap with the SABN dataset. Hence there is a definite mismatch between the datasets. This is intentional as it reflects the intended operational setting of our system.
Keyword Spotting Approaches
Here we describe the combined CNN-DTW keyword spotting method. We also use direct DTW and a CNN classifier as baselines, and hence these are also briefly discussed.
Dynamic time warping (DTW)
In low-resource settings with scarce training data, DTW is an attractive approach, but it can be prohibitively slow since it requires repeated alignment. We make use of a simple DTW implementation in which isolated keywords slide over the search audio, with a 3-frame-skip, and a frame-wise comparison is performed while warping the time axis. From this, a normalized per-frame cosine cost is obtained, with 0 indicating a portion of speech that matches the keyword exactly. The presence or absence of the keyword is determined by applying an appropriate threshold to this cost.
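A minimal NumPy sketch of this sliding DTW search is given below. The window length (set equal to the keyword length) and the particular per-frame cost normalisation are assumptions; the exact implementation details may differ.

```python
# Sketch of the sliding DTW keyword search (cosine frame cost, 3-frame skip).
import numpy as np

def dtw_cost(query, segment):
    """Per-frame normalised DTW alignment cost using the cosine distance."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    s = segment / np.linalg.norm(segment, axis=1, keepdims=True)
    dist = 1.0 - q @ s.T                       # (len(query), len(segment))
    acc = np.full(dist.shape, np.inf)
    acc[0, 0] = dist[0, 0]
    for i in range(dist.shape[0]):
        for j in range(dist.shape[1]):
            if i == 0 and j == 0:
                continue
            prev = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = dist[i, j] + prev
    return acc[-1, -1] / (dist.shape[0] + dist.shape[1])   # one simple normalisation

def best_match_cost(query, utterance, step=3):
    """Slide the keyword over the utterance and keep the best-matching segment."""
    n = len(query)
    starts = range(0, max(1, len(utterance) - n + 1), step)
    return min(dtw_cost(query, utterance[s:s + n]) for s in starts)
```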
Convolutional neural network (CNN) classifier
As a baseline, we train a CNN classifier as an end-to-end keyword spotter. This would be typical in high-resource settings BIBREF16 , BIBREF24 , BIBREF25 . We perform supervised training using the 1920 recorded isolated keywords with negative examples drawn randomly from utterances in the SABN training set. For testing, a 60-frame window slides over the test utterances. The presence or absence of a keyword is again based on a threshold.
CNN-DTW keyword spotting
Rather than using labels (as in the CNN classifier above), the CNN-DTW keyword spotting approach uses DTW to generate sufficient training data as targets for a CNN. The CNN-DTW is subsequently employed as the keyword spotter; this is computationally much more efficient than direct DTW. DTW similarity scores are computed between our small set of isolated keywords and a much larger untranscribed dataset, and these scores are subsequently used as targets to train a CNN, as shown in Figure FIGREF9 . Our contribution here over our previous work BIBREF18 is to use multilingual BNFs instead of MFCCs, both for performing the DTW matching and as inputs to the CNN-DTW model. In Figure FIGREF9 , the upper half shows how the supervisory signals are obtained using DTW, and the lower half shows how the CNN is trained. Equation ( EQREF8 ) shows how keyword scores are computed, resulting in a target vector $\mathbf {y}_{i}$ for each utterance $i$, with one entry per keyword type: $e_{i,k} = \min _{j,\, \mathbf {s} \in \mathcal {S}_{i}} \mathrm {DTW}(\mathbf {x}^{(k)}_{j}, \mathbf {s})$.
Here, $\mathbf {x}^{(k)}_{j}$ is the sequence of speech features for the $j^{\mathrm {th}}$ exemplar of keyword $k$, $\mathbf {s}$ is a successive segment of utterance $i$ (with $\mathcal {S}_{i}$ the set of all such segments), and $\mathrm {DTW}(\cdot ,\cdot )$ is the DTW alignment cost between the speech features of an exemplar and the segment. Each value $e_{i,k}$ is then mapped to a score $y_{i,k} \in [0,1]$, with 1 indicating a perfect match and 0 indicating dissimilarity, thus forming the target vector $\mathbf {y}_{i}$ for utterance $i$. A CNN is then trained using a summed cross-entropy loss (which is why the scores are mapped to the interval $[0,1]$) with utterance $i$ as input and $\mathbf {y}_{i}$ as target. The CNN architecture is the same as used in BIBREF18 . Finally, the trained CNN is applied to unseen utterances.
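The target construction and loss can be sketched as follows, reusing the `best_match_cost` function from the DTW sketch above; the exact cost-to-score mapping used by the paper's Eq. (1) is not recoverable here, so the simple `1 - cost` mapping is an assumption.

```python
# Sketch: DTW costs become soft CNN targets for the CNN-DTW keyword spotter.
import torch
import torch.nn.functional as F

def dtw_targets(utterance, keyword_exemplars):
    """keyword_exemplars: one list of exemplar feature arrays per keyword type.
    Returns a [0, 1] target score per keyword for this utterance."""
    scores = []
    for exemplars in keyword_exemplars:
        cost = min(best_match_cost(ex, utterance) for ex in exemplars)
        scores.append(1.0 - cost)              # assumed mapping: 1 = perfect match
    return torch.tensor(scores).clamp(0.0, 1.0)

def summed_cross_entropy(cnn_logits, targets):
    # Summed (not averaged) cross-entropy against the soft DTW targets.
    return F.binary_cross_entropy(torch.sigmoid(cnn_logits), targets,
                                  reduction="sum")
```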
Bottleneck and Autoencoder Features
Our previous work focused purely on using data from the low-resource target language. However, large annotated speech resources exist for several well-resourced languages. We investigate whether such resources can be used to improve the CNN-DTW system in the unseen low-resource language.
Bottleneck features
One way to re-use information extracted from other multilingual corpora is to use multilingual bottleneck features (BNFs), which have been shown to perform well in conventional ASR as well as in intrinsic evaluations BIBREF19 , BIBREF26 , BIBREF27 , BIBREF20 , BIBREF28 , BIBREF29 . These features are typically obtained by training a deep neural network jointly on several languages for which labelled data is available. The bottom layers of the network are normally shared across all training languages. The network then splits into separate parts for each of the languages, or has a single shared output. The final output layer has phone labels or HMM states as targets. The final shared layer often has a lower dimensionality than the input layer, and is therefore referred to as a `bottleneck'. The intuition is that this layer should capture aspects that are common across all the languages. We use such features from a multilingual neural network in our CNN-DTW keyword spotting approach. The BNFs are trained on a set of well-resourced languages different from the target language.
Different neural architectures can be used to obtain BNFs following the above methodology. Here we use time-delay neural networks (TDNNs) BIBREF30 . We consider two models: a multilingual TDNN trained on only two languages, and a TDNN trained on ten diverse languages. Our aim is to investigate whether it is necessary to have a large set of diverse languages, or whether it is sufficient to simply obtain features from a supervised model trained on a smaller set of languages.
An 11-layer 2-language TDNN was trained using 40-dimensional high-resolution MFCC features as input on a combined set of Dutch and Frisian speech, as described in BIBREF31 . Speaker adaptation is used with lattice-free maximum mutual information training, based on the Kaldi Switchboard recipe BIBREF32 . Each layer uses ReLU activations with batch normalisation. By combining the FAME BIBREF33 and CGN BIBREF34 corpora, the training set consists of a combined 887 hours of data in the two languages. 40-dimensional BNFs are extracted from the resulting model.
A 6-layer 10-language TDNN was trained on the GlobalPhone corpus, also using 40-dimensional high-resolution MFCC features as input, as described in BIBREF20 . For speaker adaptation, a 100-dimensional i-vector was appended to the MFCC input features. The TDNN was trained with a block-softmax, with the hidden layers shared across all languages and a separate output layer for each language. Each of the six hidden layers had 625 dimensions, and was followed by a 39-dimensional bottleneck layer with ReLU activations and batch normalisation. Training was accomplished using the Kaldi Babel recipe using 198 hours of data in 10 languages (Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese) from GlobalPhone.
Autoencoder features
BNFs are trained in a supervised fashion with acoustic feature presented at the input and phone targets at the outputs. A more general scenario, however, is one in which training data is unlabelled, and these targets are therefore not known. In this case, it may be possible to learn useful representations by using an unsupervised model. An autoencoder is a neural network trained to reconstruct its input. By presenting the same data at the input and the output of the network while constraining intermediate connections, the network is trained to find an internal representation that is useful for reconstruction. These internal representations can be useful as features BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF39 , BIBREF40 . Like BNFs, autoencoders can be trained on languages different from the target language (often resulting in more data to train on).
Here we use a stacked denoising autoencoder BIBREF41 . In this model, each layer is trained individually like an autoencoder with added noise to reconstruct the output of the previous layer. Once a layer has been trained, its weights are fixed and its outputs become the inputs to the next layer to be trained. After all the layers are pre-trained in this fashion, the layers are stacked and fine-tuned. We use mean squared error loss and Adam optimisation BIBREF42 throughout. We trained a 7-layer stacked denoising autoencoder on an untranscribed dataset consisting of 160 h of Acholi, 154 h of Luganda, 9.45 h of Lugbara, 7.82 h of Rutaroo and 18 h of Somali data. We used 39-dimensional MFCCs (13 cepstra with deltas and delta-deltas) as input and extracted features from the 39-dimensional fourth layer.
Experimental setup
The experimental setup is similar to that of BIBREF18 . We consider three baseline systems: two DTW systems and a conventional CNN classifier.
Our proposed approach, CNN-DTW, is supervised by the DTW-KS system. Hyper-parameters for CNN-DTW were optimized using the target loss on the development set. Hence, the SABN transcriptions are not used for training or validation. Performance is reported in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC) and equal error rate (EER). The ROC is obtained by varying the detection threshold and plotting the false positive rate against the true positive rate. AUC, therefore, indicates the performance of the model independent of a threshold, with higher AUC indicating a better model. EER is the point at which the false positive rate equals the false negative rate and hence lower EER indicates a better model.
Experimental results
We consider four feature extractors in our experiments: baseline MFCCs, stacked denoising autoencoder features, BNFs from the 2-language TDNN, and BNFs from the 10-language TDNN.
In initial experiments, we first consider the performance of these features on development data. Specifically, we use the features as representations in the DTW-based keyword spotter (DTW-KS). Results are shown in Table TABREF23 . BNFs trained on 10 languages outperform all other approaches, with speaker normalisation giving a further slight improvement. Both the stacked autoencoder and the BNFs trained on two languages perform worse than the MFCC baseline. This seems to indicate that a larger number of diverse languages is beneficial for training BNFs, and that supervised models are superior to unsupervised models when applied to an unseen target language. However, further experiments are required to verify this definitively. Based on these development experiments, we compare MFCCs and TDNN-BNF-10lang-SPN features when used for keyword spotting on evaluation data.
Table TABREF24 shows the performance of the three baseline systems and CNN-DTW when using MFCCs and BNFs. In all cases except the CNN classifier, BNFs lead to improvements over MFCCs. Furthermore, we see that, when using BNFs, the CNN-DTW system performs almost as well as its DTW-KS counterpart. The DTW-KS system provided the targets with which the CNN-DTW system was trained, and hence represents an upper bound on the performance we can expect from the CNN-DTW wordspotter. When using BNFs, we see that the difference between the DTW-KS and CNN-DTW approaches becomes smaller compared to the difference for MFCCs. This results in the CNN-DTW system using BNFs almost achieving the performance of the DTW-KS system; the former, however, is computationally much more efficient since alignment is not required. On a conventional desktop PC with a single NVIDIA GeForce GTX 1080 GPU, CNN-DTW takes approximately 5 minutes compared to DTW-KS which takes 900 minutes on a 20-core CPU machine. Table TABREF24 shows that, in contrast to when MFCCs are used, a Gaussian noise layer (CNN-DTW-GNL) does not give further performance benefits for the BNF systems.
Figures FIGREF25 (a-f) show ROC plots for a selection of keywords which are representative of cases with both good and bad performance. AUC improves in all cases when switching from MFCCs to BNFs, except for health, where the difference is relatively small (all scores are close to chance on this keyword). In some cases, e.g. for wounded, the benefits of switching to BNFs in CNN-DTW are substantial. Interestingly, for keywords such as attack, the CNN-DTW system using BNFs actually marginally outperforms the DTW-KS system which is used to supervise it.
Conclusion
We investigated the use of multilingual bottleneck features (BNFs) and autoencoder features in a CNN-DTW keyword spotter. While autoencoder features and BNFs trained on two languages did not improve performance over MFCCs, BNFs trained on a corpus of 10 languages led to substantial improvements. We conclude that our overall CNN-DTW based approach, which combines the low-resource advantages of DTW with the speed advantages of CNNs, further benefits by incorporating labelled data from well-resourced languages through the use of BNFs when these are obtained from several diverse languages.
Acknowledgements: We thank the NVIDIA corporation for the donation of GPU equipment used for this research. We also gratefully acknowledge the support of Telkom South Africa. | Bulgarian, Czech, French, German, Korean, Polish, Portuguese, Russian, Thai, Vietnamese |
390aa2d733bd73699899a37e65c0dee4668d2cd8 | 390aa2d733bd73699899a37e65c0dee4668d2cd8_0 | Q: Do they compare speed performance of their model compared to the ones using the LID model?
Text: Introduction
Code-switching (CS) speech is defined as the alternation of languages in an utterance; it is a pervasive communicative phenomenon in multilingual communities. Therefore, developing a CS speech recognition (CSSR) system is of great interest.
However, the CS scenario presents challenges to recognition systems BIBREF0. Some attempts based on the DNN-HMM framework have been made to alleviate these problems BIBREF1, BIBREF2. These methods usually contain components including acoustic, language, and lexicon models that are trained with different objectives separately, which can lead to sub-optimal performance. Moreover, the design of a complicated lexicon covering different languages consumes considerable human effort.
Therefore, end-to-end framework for CSSR has received increasing attention recently BIBREF3, BIBREF4, BIBREF5. Examples of such models include connectionist temporal classification (CTC) BIBREF6, attention-based encoder-decoder models BIBREF7, BIBREF8, and the recurrent neural network transducer (RNN-T) BIBREF9, BIBREF10, BIBREF11, BIBREF12.
These methods combine acoustic, language, and lexicon models into a single model with joint training. RNN-T and attention-based models trained on large speech corpora perform competitively compared to state-of-the-art models in some tasks BIBREF13. However, the lack of CS training data poses a serious problem for end-to-end methods. To address this problem, language identity information is utilized to improve recognition performance BIBREF3, BIBREF4, BIBREF5. These approaches are usually based on CTC or attention-based encoder-decoder models, or a combination of both. However, previous works use an additional language identification (LID) model as an auxiliary module, which makes the system complex.
In this paper, we propose an improved RNN-T model with language bias to alleviate the problem. The model is trained to predict language IDs as well as the subwords. To ensure the model can learn CS information, we add language IDs at the CS points of the transcription, as illustrated in Fig. 1. In the figure, we use the arrangements of different geometric icons to represent the CS distribution. Compared with normal text, the tagged data can bias the RNN-T to predict language IDs at CS points. So our method can model the CS distribution directly; no additional LID model is needed. We then constrain each input word embedding with its corresponding language ID, which helps the model learn language identity information from the transcription. In the inference process, the predicted language IDs are used to adjust the output posteriors. The experimental results on a CS corpus show that our proposed method outperforms the RNN-T baseline (without language bias) significantly. Overall, our best model achieves 16.2% and 12.9% relative error reduction on the two test sets, respectively. To the best of our knowledge, this is the first attempt to use an RNN-T model with language bias as an end-to-end CSSR strategy.
The rest of the paper is organized as follows. In Section 2, we review the RNN-T model. In Section 3, we describe the intuition of the proposed model. In Section 4, we present the experimental setups, and in Section 5, we report and discuss the experimental results in detail. Finally, we conclude the paper in Section 6.
Review of RNN-T
Although CTC has been applied successfully in the context of speech recognition, it assumes that outputs at each step are independent of the previous predictions BIBREF6. RNN-T is an improved model based on CTC; it is augmented with a prediction network, which is explicitly conditioned on the previous outputs BIBREF10, as illustrated in Fig. 2(a).
Let $\mathnormal { \mathbf {X} = (\mathbf {x}_{1}, \mathbf {x}_{2}, ... , \mathbf {x}_{T})}$ be the acoustic input sequence, where $T$ is the number of frames in the sequence. Let $\mathbf {Y} = (\mathnormal {y}_{1}, \mathnormal {y}_{2}, ... , \mathnormal {y}_{U})$ be the corresponding sequence of output targets (without language IDs) over the RNN-T output space $\mathcal {Y}$, and $\mathcal {Y}^{*}$ be the set of all possible sequences over $\mathcal {Y}$. In the context of ASR, the input sequence is much longer than the output target sequence, i.e., $T>U$. Because the frame-level alignments of the target labels are unknown, RNN-T augments the output set with an additional symbol, referred to as the $\mathit {blank}$ symbol, denoted as $\phi $, i.e., $\bar{\mathcal {Y}} = \mathcal {Y} \cup \lbrace \phi \rbrace $. We denote $\hat{\mathbf {Y}} \in \bar{\mathcal {Y}}^{*}$ as an alignment, which is equivalent to $(\mathnormal {y}_{1}, \mathnormal {y}_{2},\mathnormal {y}_{3}) \in \mathcal {Y}^*$ after the operation $\mathcal {B}$, for example $\hat{\mathbf {Y}} = (\mathnormal {y}_{1}, \phi , \mathnormal {y}_{2}, \phi , \phi , \mathnormal {y}_{3}) \in \bar{\mathcal {Y}}^{*}$. Given the input sequence $\mathbf {X}$, RNN-T models the conditional probability $P(\mathbf {Y} \in \mathcal {Y}^* | \mathbf {X})$ by marginalizing over all possible alignments:
where $\mathcal {B}$ is the function that removes consecutive identical symbols and then removes any blanks from a given alignment in $\bar{\mathcal {Y}}^{*}$.
An RNN-T model consists of three different networks as illustrated in Fig. 2(a). (a) Encoder network (referred to as transcription network) maps the acoustic features into higher level representation $\mathbf {h}_{t}^{enc} = f^{enc}(\lbrace \mathbf {x}_{\tau }\rbrace _{1 \le \tau \le t})$. (b) Prediction network produces output vector $\mathbf {p}_{u} = f^{pred}(\lbrace \mathnormal {y}_{v}\rbrace _{1 \le v \le u-1})$ based on the previous non-blank input label. (c) Joint network computes logits by combining the outputs of the previous two networks $z_{t,u} = f^{joint} (\mathbf {h}_{t}^{enc}, \mathbf {p}_{u})$. These logits are then passed to a softmax layer to define a probability distribution. The model can be trained by maximizing the log-likelihood of $P(\mathbf {Y} \in \mathcal {Y}^* | \mathbf {X})$.
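As a concrete illustration of how the three networks fit together, the following PyTorch sketch combines an encoder, a prediction network and a joint network; the layer sizes follow the experimental setup later in the paper, but the concatenation-style joint, the blank index and the default vocabulary size are assumptions rather than the exact implementation.

```python
# Illustrative sketch of the three RNN-T components (encoder, prediction
# network, joint network) and how their outputs are combined.
import torch
import torch.nn as nn

class RNNTSketch(nn.Module):
    def __init__(self, feat_dim=80, vocab_size=6733, hidden=512):
        super().__init__()
        self.blank = 0                                   # index of the blank symbol
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=4, batch_first=True)
        self.embed = nn.Embedding(vocab_size + 1, hidden)    # +1 for blank
        self.predictor = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.joint = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                   nn.Linear(hidden, vocab_size + 1))

    def forward(self, feats, labels):
        # feats: (B, T, feat_dim); labels: (B, U) previous non-blank symbols,
        # usually prepended with a blank/start symbol.
        h_enc, _ = self.encoder(feats)                   # (B, T, H)
        h_pred, _ = self.predictor(self.embed(labels))   # (B, U, H)
        T, U = h_enc.size(1), h_pred.size(1)
        # Combine every (t, u) pair of encoder and prediction states.
        z = torch.cat([h_enc.unsqueeze(2).expand(-1, -1, U, -1),
                       h_pred.unsqueeze(1).expand(-1, T, -1, -1)], dim=-1)
        return self.joint(z)                             # (B, T, U, vocab+1) logits
```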
RNN-T with language bias
In this paper, we aim to build a concise end-to-end CSSR model that can handle speech recognition and LID simultaneously. For this task, we augment the output symbol set with the language IDs $<chn>$ and $<eng>$ as shown in Fig. 1, i.e., $\hat{\mathcal {Y}} = \bar{\mathcal {Y}} \cup \lbrace <chn>,<eng>\rbrace $. The intuition behind this is that the CS in the transcript may obey a certain probability distribution, and this distribution can be learned by a neural network.
The properties of RNN-T are key to this problem. It can predict a rich set of target symbols, such as speaker roles and an "end-of-word" symbol, which are not related to the input features directly BIBREF14, BIBREF15. So the language IDs can also be treated as output symbols. What's more, RNN-T can seamlessly integrate acoustic and linguistic information. Its prediction network can be viewed as an RNN language model which predicts the current label given the label history BIBREF10. So it is effective in incorporating LID into the language model. In general, predicting language IDs only from text data is difficult. However, the joint training mechanism of RNN-T allows it to combine language and acoustic information to model the CS distribution. Furthermore, the tagged text can bias the RNN-T to predict language IDs which indicate CS points, whereas a model trained with normal text cannot do this. That is why we choose RNN-T to build the end-to-end CSSR system.
To help the model learn the CS distribution more efficiently, we concatenate a short vector to all the English word embeddings and the English tag $<eng>$ embedding, and a different vector to the Mandarin ones, as shown at the bottom of Fig. 2(b). This enhances the dependence of each word embedding on its corresponding language ID. In the training process, the RNN-T model can easily learn the distinguishing information between the two languages. The experimental results show that the word embedding constraint is an effective technique. In the inference process, we use the predicted language ID to adjust the output posteriors, as shown at the top of Fig. 2(b). This biases the model towards predicting words of the corresponding language in the next decoding step. Overall, our proposed method can handle speech recognition and LID simultaneously in a simple way, without additional burden. This study provides new insights into the CS information in text data and its application in end-to-end CSSR systems. As a final note, the training and inference algorithms of the proposed model are similar to those of the standard RNN-T model.
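A minimal sketch of the two language-bias mechanisms is given below. The size of the appended language vectors, whether they are learned or fixed, and the exact form of the posterior re-weighting are not specified in the text, so all three are assumptions here.

```python
# Sketch of the language-bias mechanisms: language-dependent embedding suffix
# and posterior re-weighting during decoding.
import torch
import torch.nn as nn

class LanguageBiasedEmbedding(nn.Module):
    """Word embeddings with a short language vector appended: all English
    units (and <eng>) share one suffix, all Mandarin units (and <chn>) another."""
    def __init__(self, vocab_size, lang_of_token, embed_dim=512, lang_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lang_vecs = nn.Parameter(torch.randn(2, lang_dim))  # 0 = chn, 1 = eng
        # lang_of_token: LongTensor of shape (vocab_size,) giving each unit's language.
        self.register_buffer("lang_of_token", lang_of_token)

    def forward(self, tokens):
        suffix = self.lang_vecs[self.lang_of_token[tokens]]
        return torch.cat([self.embed(tokens), suffix], dim=-1)

def reweight_posteriors(log_probs, lang_mask, lam=0.2):
    """During decoding, bias the next-step outputs towards the language given
    by the most recently predicted language ID (lang_mask is 1 for its units)."""
    return log_probs + lam * lang_mask
```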
Experiments setups ::: Dataset
We conduct experiments on SEAME (South East Asia Mandarin English), a spontaneous conversational bilingual speech corpus BIBREF16. Most of the utterances contain both Mandarin and English, uttered in interviews and conversations. We use the standard data partitioning of previous works, which consists of three parts: $train$, $test_{sge}$ and $test_{man}$ (see Table 1) BIBREF2. $test_{sge}$ is biased towards Southeast Asian accented English speech and $test_{man}$ is biased towards Mandarin speech.
Building an end-to-end model requires a large amount of training data, so we apply speed perturbation to augment the speech data BIBREF17. This gives us 3 times the original amount of data, at speed rates of 0.9, 1.0, and 1.1 relative to the original speech. We use the augmented data to build our DNN-HMM system and RNN-T system.
Experiments setups ::: DNN-HMM Baseline System
In addition to the RNN-T baseline system, we also build a conventional DNN-HMM baseline for comparison. The model is based on a time delay neural network (TDNN) trained with lattice-free maximum mutual information (LF-MMI) BIBREF18. The TDNN model has 7 hidden layers with 512 units, and the input acoustic features are 13-dimensional Mel-frequency cepstral coefficients (MFCCs). For language modeling, we use the SRI language modeling toolkit BIBREF19 to build a 4-gram language model from the training transcriptions. We construct the lexicon by combining the CMU English lexicon and our Mandarin lexicon.
Experiments setups ::: RNN-T System
We construct the RNN-T baseline system as described in Section 3.1. The encoder network of the RNN-T model consists of 4 layers of 512 long short-term memory (LSTM) units. The prediction network is 2 layers with 512 LSTM units. The joint network consists of a single feed-forward layer of 512 units with a tanh activation function.
The input acoustic features of the encoder network are 80-dimensional log Mel-filterbank features with 25ms windowing and 10ms frame shift. Mean and variance normalization is applied to the features. The input word embeddings of the prediction network are 512-dimensional continuous vectors. During training, the ADAM algorithm is used as the optimization method; we set the initial learning rate to 0.001 and decrease it linearly when there is no improvement on the validation set. To reduce over-fitting, the dropout rate is set to 0.2 throughout all the experiments. In the inference process, the beam-search algorithm BIBREF9 with beam size 35 is used to decode the model. All the RNN-T models are trained from scratch using PyTorch.
Experiments setups ::: Wordpieces
For the Mandarin-English CSSR task, a natural way to construct output units is to use characters. However, there are several thousand Chinese characters and only 26 English letters. Meanwhile, the acoustic counterpart of a Chinese character is much longer than that of an English letter. Character modeling units would therefore result in a significant discrepancy between the two languages. To alleviate this problem, we adopt BPE subwords BIBREF20 as the English modeling units. The targets of our RNN-T baseline system contain 3090 English wordpieces and 3643 Chinese characters. The BPE subword units not only increase the duration of the English modeling units but also keep the number of units of the two languages balanced.
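For illustration, an English wordpiece inventory of this size could be built as in the sketch below. The paper does not name its BPE toolkit, so the use of sentencepiece, the file names, and the options shown here are assumptions.

```python
# Sketch of building the English BPE wordpiece inventory (toolkit choice,
# file names and options are assumptions, not the paper's exact setup).
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="english_training_text.txt",   # English side of the training transcriptions
    model_prefix="bpe_en",
    vocab_size=3090,                     # number of English wordpiece targets
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="bpe_en.model")
print(sp.encode("code switching speech", out_type=str))
```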
Experiments setups ::: Evaluation Metrics
In this paper, we use the mixed error rate (MER) to evaluate the experimental results of our methods. The MER is defined as the combination of word error rate (WER) for English and character error rate (CER) for Mandarin. This metric balances the Mandarin and English error rates better than WER or CER alone.
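In other words, Mandarin characters and English words are treated as the units of a single edit-distance computation, as in the sketch below; the tokenisation regex is a simplification and may not match the paper's exact scoring script.

```python
# Sketch of the mixed error rate over Mandarin characters and English words.
import re

def tokenize_mixed(text):
    # Split into Mandarin characters and English words (simplified assumption).
    return re.findall(r"[\u4e00-\u9fff]|[A-Za-z']+", text)

def edit_distance(ref, hyp):
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1]

def mer(ref_text, hyp_text):
    ref, hyp = tokenize_mixed(ref_text), tokenize_mixed(hyp_text)
    return edit_distance(ref, hyp) / max(1, len(ref))
```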
Results and Analysis ::: Results of RNN-T Model
Table 2 reports our main experimental results for different setups with standard decoding in inference. It is obvious that the MERs of the end-to-end systems are not as competitive as that of the LF-MMI TDNN system. This result is consistent with some other reports BIBREF3. However, data augmentation is more effective for the end-to-end systems than for the TDNN system. This suggests that the gap between our RNN-T and the TDNN may further reduce with increasing data. Furthermore, we can also observe that all the results on $test_{sge}$ are much worse than those on $test_{man}$. This is probably because the accented English in $test_{sge}$ is more difficult for the recognition system. Bilinguals usually have strong accents, which poses a challenge to CSSR approaches.
Because data augmentation significantly reduces the MER of the end-to-end model, we conduct all the following experiments on the augmented training data. In order to fairly compare the results of the proposed methods with the baseline, we remove all the language IDs from the decoded transcriptions. We find that the performance of the RNN-T model trained on tagged transcriptions (without the word embedding constraint) is much better than the RNN-T baseline. It achieves 9.3% and 7.6% relative MER reduction on the two test sets respectively. This shows that the tagged text can improve the modeling ability of RNN-T for the CSSR problem, and it is the main factor behind the MER reduction in our experiments. Furthermore, the word embedding constraint also improves the performance of the system, though not significantly. Overall, our proposed methods yield improved results without additional training or inference burden.
Results and Analysis ::: Effect of Language IDs Re-weighted Decode
We then evaluate the system performance by adjusting the weights of the next-step predictions in the decoding process. Table 3 shows the results of the RNN-T model with different language ID weights in inference. It is obvious that the re-weighted methods outperform the model with the standard decoding process. This suggests that the predicted language IDs can effectively guide the model's decoding.
Because the model assigns language IDs to the recognized words directly, the language ID error rate is hard to compute. This result may imply that the prediction accuracy of our method is high enough to guide decoding. Meanwhile, we also find that the re-weighted method is more effective on $test_{man}$ than on $test_{sge}$. This could be caused by a higher language ID prediction accuracy on $test_{man}$. The results for the two different values of $\lambda $ have similar MERs, and we set $\lambda =0.2$ in the following experiments.
Results and Analysis ::: Results of Language Model Re-score
Table 4 shows the MER results of N-best (N=35) re-scoring with N-gram and neural language models. Both language models are trained on the tagged training transcriptions. We see that language model re-scoring can further improve the performance of the models. This reveals that the prediction network of the RNN-T still has room for further optimization. Finally, compared to the RNN-T baseline without data augmentation, the best results of the proposed method achieve 25.9% and 21.3% relative MER reduction on the two test sets respectively. Compared to the RNN-T baseline with data augmentation, the proposed method achieves 16.2% and 12.9% relative MER reduction. In both scenarios, our RNN-T methods achieve better performance than the baselines.
Conclusions and Future Work
In this work we develop an improved RNN-T model with language bias for the end-to-end Mandarin-English CSSR task. Our method can handle speech recognition and LID simultaneously; no additional LID system is needed. It yields consistently improved MER results without increasing the training or inference burden. Experimental results on SEAME show that the proposed approaches significantly reduce the MER on the two test sets from 33.3% and 44.9% to 27.9% and 39.1% respectively.
In the future, we plan to pre-train the prediction network of the RNN-T model using a large text corpus, and then fine-tune the RNN-T model with labeled speech data while keeping the prediction network frozen.
Acknowledgment
This work is supported by the National Key Research & Development Plan of China (No.2018YFB1005003) and the National Natural Science Foundation of China (NSFC) (No.61425017, No.61831022, No.61773379, No.61771472) | Unanswerable |
86083a02cc9a80b31cac912c42c710de2ef4adfd | 86083a02cc9a80b31cac912c42c710de2ef4adfd_0 | Q: How do they obtain language identities?
Text: Introduction
Code-switching (CS) speech is defined as the alternation of languages in an utterance; it is a pervasive communicative phenomenon in multilingual communities. Therefore, developing a CS speech recognition (CSSR) system is of great interest.
However, the CS scenario presents challenges to recognition systems BIBREF0. Some attempts based on the DNN-HMM framework have been made to alleviate these problems BIBREF1, BIBREF2. These methods usually contain components including acoustic, language, and lexicon models that are trained with different objectives separately, which can lead to sub-optimal performance. Moreover, the design of a complicated lexicon covering different languages consumes considerable human effort.
Therefore, end-to-end framework for CSSR has received increasing attention recently BIBREF3, BIBREF4, BIBREF5. Examples of such models include connectionist temporal classification (CTC) BIBREF6, attention-based encoder-decoder models BIBREF7, BIBREF8, and the recurrent neural network transducer (RNN-T) BIBREF9, BIBREF10, BIBREF11, BIBREF12.
These methods combine acoustic, language, and lexicon models into a single model with joint training. RNN-T and attention-based models trained on large speech corpora perform competitively compared to state-of-the-art models in some tasks BIBREF13. However, the lack of CS training data poses a serious problem for end-to-end methods. To address this problem, language identity information is utilized to improve recognition performance BIBREF3, BIBREF4, BIBREF5. These approaches are usually based on CTC or attention-based encoder-decoder models, or a combination of both. However, previous works use an additional language identification (LID) model as an auxiliary module, which makes the system complex.
In this paper, we propose an improved RNN-T model with language bias to alleviate the problem. The model is trained to predict language IDs as well as the subwords. To ensure the model can learn CS information, we add language IDs at the CS points of the transcription, as illustrated in Fig. 1. In the figure, we use the arrangements of different geometric icons to represent the CS distribution. Compared with normal text, the tagged data can bias the RNN-T to predict language IDs at CS points. So our method can model the CS distribution directly; no additional LID model is needed. We then constrain each input word embedding with its corresponding language ID, which helps the model learn language identity information from the transcription. In the inference process, the predicted language IDs are used to adjust the output posteriors. The experimental results on a CS corpus show that our proposed method outperforms the RNN-T baseline (without language bias) significantly. Overall, our best model achieves 16.2% and 12.9% relative error reduction on the two test sets, respectively. To the best of our knowledge, this is the first attempt to use an RNN-T model with language bias as an end-to-end CSSR strategy.
The rest of the paper is organized as follows. In Section 2, we review the RNN-T model. In Section 3, we describe the intuition of the proposed model. In Section 4, we present the experimental setups, and in Section 5, we report and discuss the experimental results in detail. Finally, we conclude the paper in Section 6.
Review of RNN-T
Although CTC has been applied successfully in the context of speech recognition, it assumes that outputs at each step are independent of the previous predictions BIBREF6. RNN-T is an improved model based on CTC; it is augmented with a prediction network, which is explicitly conditioned on the previous outputs BIBREF10, as illustrated in Fig. 2(a).
Let $\mathnormal { \mathbf {X} = (\mathbf {x}_{1}, \mathbf {x}_{2}, ... , \mathbf {x}_{T})}$ be the acoustic input sequence, where $T$ is the number of frames in the sequence. Let $\mathbf {Y} = (\mathnormal {y}_{1}, \mathnormal {y}_{2}, ... , \mathnormal {y}_{U})$ be the corresponding sequence of output targets (without language IDs) over the RNN-T output space $\mathcal {Y}$, and $\mathcal {Y}^{*}$ be the set of all possible sequences over $\mathcal {Y}$. In the context of ASR, the input sequence is much longer than the output target sequence, i.e., $T>U$. Because the frame-level alignments of the target labels are unknown, RNN-T augments the output set with an additional symbol, referred to as the $\mathit {blank}$ symbol, denoted as $\phi $, i.e., $\bar{\mathcal {Y}} = \mathcal {Y} \cup \lbrace \phi \rbrace $. We denote $\hat{\mathbf {Y}} \in \bar{\mathcal {Y}}^{*}$ as an alignment, which is equivalent to $(\mathnormal {y}_{1}, \mathnormal {y}_{2},\mathnormal {y}_{3}) \in \mathcal {Y}^*$ after the operation $\mathcal {B}$, for example $\hat{\mathbf {Y}} = (\mathnormal {y}_{1}, \phi , \mathnormal {y}_{2}, \phi , \phi , \mathnormal {y}_{3}) \in \bar{\mathcal {Y}}^{*}$. Given the input sequence $\mathbf {X}$, RNN-T models the conditional probability $P(\mathbf {Y} \in \mathcal {Y}^* | \mathbf {X})$ by marginalizing over all possible alignments:
where $\mathcal {B}$ is the function that removes consecutive identical symbols and then removes any blanks from a given alignment in $\bar{\mathcal {Y}}^{*}$.
An RNN-T model consists of three different networks as illustrated in Fig. 2(a). (a) Encoder network (referred to as transcription network) maps the acoustic features into higher level representation $\mathbf {h}_{t}^{enc} = f^{enc}(\lbrace \mathbf {x}_{\tau }\rbrace _{1 \le \tau \le t})$. (b) Prediction network produces output vector $\mathbf {p}_{u} = f^{pred}(\lbrace \mathnormal {y}_{v}\rbrace _{1 \le v \le u-1})$ based on the previous non-blank input label. (c) Joint network computes logits by combining the outputs of the previous two networks $z_{t,u} = f^{joint} (\mathbf {h}_{t}^{enc}, \mathbf {p}_{u})$. These logits are then passed to a softmax layer to define a probability distribution. The model can be trained by maximizing the log-likelihood of $P(\mathbf {Y} \in \mathcal {Y}^* | \mathbf {X})$.
RNN-T with language bias
In this paper, we aim to build a concise end-to-end CSSR model that can handle speech recognition and LID simultaneously. For this task, we augment the output symbol set with the language IDs $<chn>$ and $<eng>$ as shown in Fig. 1, i.e., $\hat{\mathcal {Y}} = \bar{\mathcal {Y}} \cup \lbrace <chn>,<eng>\rbrace $. The intuition behind this is that the CS in the transcript may obey a certain probability distribution, and this distribution can be learned by a neural network.
The properties of RNN-T are key to this problem. It can predict a rich set of target symbols, such as speaker roles and an "end-of-word" symbol, which are not related to the input features directly BIBREF14, BIBREF15. So the language IDs can also be treated as output symbols. What's more, RNN-T can seamlessly integrate acoustic and linguistic information. Its prediction network can be viewed as an RNN language model which predicts the current label given the label history BIBREF10. So it is effective in incorporating LID into the language model. In general, predicting language IDs only from text data is difficult. However, the joint training mechanism of RNN-T allows it to combine language and acoustic information to model the CS distribution. Furthermore, the tagged text can bias the RNN-T to predict language IDs which indicate CS points, whereas a model trained with normal text cannot do this. That is why we choose RNN-T to build the end-to-end CSSR system.
To help the model learn the CS distribution more efficiently, we concatenate a short vector to all the English word embeddings and the English tag $<eng>$ embedding, and a different vector to the Mandarin ones, as shown at the bottom of Fig. 2(b). This enhances the dependence of each word embedding on its corresponding language ID. In the training process, the RNN-T model can easily learn the distinguishing information between the two languages. The experimental results show that the word embedding constraint is an effective technique. In the inference process, we use the predicted language ID to adjust the output posteriors, as shown at the top of Fig. 2(b). This biases the model towards predicting words of the corresponding language in the next decoding step. Overall, our proposed method can handle speech recognition and LID simultaneously in a simple way, without additional burden. This study provides new insights into the CS information in text data and its application in end-to-end CSSR systems. As a final note, the training and inference algorithms of the proposed model are similar to those of the standard RNN-T model.
Experiments setups ::: Dataset
We conduct experiments on SEAME (South East Asia Mandarin English), a spontaneous conversational bilingual speech corpus BIBREF16. Most of the utterances contain both Mandarin and English, uttered in interviews and conversations. We use the standard data partitioning of previous works, which consists of three parts: $train$, $test_{sge}$ and $test_{man}$ (see Table 1) BIBREF2. $test_{sge}$ is biased towards Southeast Asian accented English speech and $test_{man}$ is biased towards Mandarin speech.
Building an end-to-end model requires a large amount of training data, so we apply speed perturbation to augment the speech data BIBREF17. This gives us 3 times the original amount of data, at speed rates of 0.9, 1.0, and 1.1 relative to the original speech. We use the augmented data to build our DNN-HMM system and RNN-T system.
Experiments setups ::: DNN-HMM Baseline System
In addition to the RNN-T baseline system, we also build a conventional DNN-HMM baseline for comparison. The model is based on a time delay neural network (TDNN) trained with lattice-free maximum mutual information (LF-MMI) BIBREF18. The TDNN model has 7 hidden layers with 512 units, and the input acoustic features are 13-dimensional Mel-frequency cepstral coefficients (MFCCs). For language modeling, we use the SRI language modeling toolkit BIBREF19 to build a 4-gram language model from the training transcriptions. We construct the lexicon by combining the CMU English lexicon and our Mandarin lexicon.
Experiments setups ::: RNN-T System
We construct the RNN-T baseline system as described in Section 3.1. The encoder network of the RNN-T model consists of 4 layers of 512 long short-term memory (LSTM) units. The prediction network is 2 layers with 512 LSTM units. The joint network consists of a single feed-forward layer of 512 units with a tanh activation function.
The input acoustic features of the encoder network are 80-dimensional log Mel-filterbank features with 25ms windowing and 10ms frame shift. Mean and variance normalization is applied to the features. The input word embeddings of the prediction network are 512-dimensional continuous vectors. During training, the ADAM algorithm is used as the optimization method; we set the initial learning rate to 0.001 and decrease it linearly when there is no improvement on the validation set. To reduce over-fitting, the dropout rate is set to 0.2 throughout all the experiments. In the inference process, the beam-search algorithm BIBREF9 with beam size 35 is used to decode the model. All the RNN-T models are trained from scratch using PyTorch.
Experiments setups ::: Wordpieces
For the Mandarin-English CSSR task, a natural way to construct output units is to use characters. However, there are several thousand Chinese characters and only 26 English letters. Meanwhile, the acoustic counterpart of a Chinese character is much longer than that of an English letter. Character modeling units would therefore result in a significant discrepancy between the two languages. To alleviate this problem, we adopt BPE subwords BIBREF20 as the English modeling units. The targets of our RNN-T baseline system contain 3090 English wordpieces and 3643 Chinese characters. The BPE subword units not only increase the duration of the English modeling units but also keep the number of units of the two languages balanced.
Experiments setups ::: Evaluation Metrics
In this paper, we use the mixed error rate (MER) to evaluate the experimental results of our methods. The MER is defined as the combination of word error rate (WER) for English and character error rate (CER) for Mandarin. This metric balances the Mandarin and English error rates better than WER or CER alone.
Results and Analysis ::: Results of RNN-T Model
Table 2 reports our main experimental results for different setups with standard decoding in inference. It is obvious that the MERs of the end-to-end systems are not as competitive as that of the LF-MMI TDNN system. This result is consistent with some other reports BIBREF3. However, data augmentation is more effective for the end-to-end systems than for the TDNN system. This suggests that the gap between our RNN-T and the TDNN may further reduce with increasing data. Furthermore, we can also observe that all the results on $test_{sge}$ are much worse than those on $test_{man}$. This is probably because the accented English in $test_{sge}$ is more difficult for the recognition system. Bilinguals usually have strong accents, which poses a challenge to CSSR approaches.
Because data augmentation significantly reduces the MER of the end-to-end model, we conduct all the following experiments on the augmented training data. In order to fairly compare the results of the proposed methods with the baseline, we remove all the language IDs from the decoded transcriptions. We find that the performance of the RNN-T model trained on tagged transcriptions (without the word embedding constraint) is much better than the RNN-T baseline. It achieves 9.3% and 7.6% relative MER reduction on the two test sets respectively. This shows that the tagged text can improve the modeling ability of RNN-T for the CSSR problem, and it is the main factor behind the MER reduction in our experiments. Furthermore, the word embedding constraint also improves the performance of the system, though not significantly. Overall, our proposed methods yield improved results without additional training or inference burden.
Results and Analysis ::: Effect of Language IDs Re-weighted Decode
We then evaluate the system performance by adjusting the weights of the next-step predictions in the decoding process. Table 3 shows the results of the RNN-T model with different language ID weights in inference. It is obvious that the re-weighted methods outperform the model with the standard decoding process. This suggests that the predicted language IDs can effectively guide the model's decoding.
Because the model assigns language IDs to the recognized words directly, the language ID error rate is hard to compute. This result may imply that the prediction accuracy of our method is high enough to guide decoding. Meanwhile, we also find that the re-weighted method is more effective on $test_{man}$ than on $test_{sge}$. This could be caused by a higher language ID prediction accuracy on $test_{man}$. The results for the two different values of $\lambda $ have similar MERs, and we set $\lambda =0.2$ in the following experiments.
Results and Analysis ::: Results of Language Model Re-score
Table 4 shows the MER results of N-best (N=35) re-scoring with N-gram and neural language models. Both language models are trained on the tagged training transcriptions. We see that language model re-scoring can further improve the performance of the models. This reveals that the prediction network of the RNN-T still has room for further optimization. Finally, compared to the RNN-T baseline without data augmentation, the best results of the proposed method achieve 25.9% and 21.3% relative MER reduction on the two test sets respectively. Compared to the RNN-T baseline with data augmentation, the proposed method achieves 16.2% and 12.9% relative MER reduction. In both scenarios, our RNN-T methods achieve better performance than the baselines.
Conclusions and Future Work
In this work we develop an improved RNN-T model with language bias for the end-to-end Mandarin-English CSSR task. Our method handles speech recognition and LID simultaneously; no additional LID system is needed. It yields consistently improved MER without increasing the training or inference burden. Experimental results on SEAME show that the proposed approaches significantly reduce the MER of the two dev sets from 33.3% and 44.9% to 27.9% and 39.1% respectively.
In the future, we plan to pre-train the prediction network of the RNN-T model on a large text corpus, and then fine-tune the RNN-T model with labeled speech data while freezing the prediction network.
Acknowledgment
This work is supported by the National Key Research & Development Plan of China (No.2018YFB1005003) and the National Natural Science Foundation of China (NSFC) (No.61425017, No.61831022, No.61773379, No.61771472) | model is trained to predict language IDs as well as the subwords, we add language IDs in the CS point of transcriptio |
29e5e055e01fdbf7b90d5907158676dd3169732d | 29e5e055e01fdbf7b90d5907158676dd3169732d_0 | Q: What other multimodal knowledge base embedding methods are there?
Text: Introduction
Knowledge bases (KB) are an essential part of many computational systems with applications in search, structured data management, recommendations, question answering, and information retrieval. However, KBs often suffer from incompleteness, noise in their entries, and inefficient inference under uncertainty. To address these issues, learning relational knowledge representations has been a focus of active research BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . These approaches represent relational triples, which consist of a subject entity, relation, and an object entity, by learning fixed, low-dimensional representations for each entity and relation from observations, encoding the uncertainty and inferring missing facts accurately and efficiently. The subject and the object entities come from a fixed, enumerable set of entities that appear in the knowledge base. Knowledge bases in the real world, however, contain a wide variety of data types beyond these direct links. Apart from relations to a fixed set of entities, KBs often include not only numerical attributes (such as ages, dates, financial, and geoinformation), but also textual attributes (such as names, descriptions, and titles/designations) and images (profile photos, flags, posters, etc.). These different types of data can play a crucial role as extra pieces of evidence for knowledge base completion. For example, the textual descriptions and images might provide evidence for a person's age, profession, and designation. In the multimodal KB shown in Figure 1 for example, the image can be helpful in predicting Carles Puyol's occupation, while the description contains his nationality. Incorporating this information into existing approaches as entities, unfortunately, is challenging as they assign each entity a distinct vector and predict missing links (or attributes) by enumerating over the possible values, both of which are only possible if the entities come from a small, enumerable set. There is thus a crucial need for relational modeling that goes beyond just the link-based view of KB completion, by not only utilizing multimodal information for better link prediction between existing entities, but also being able to generate missing multimodal values.
In this paper, we introduce multimodal knowledge base embeddings (MKBE) for modeling knowledge bases that contain a variety of data types, such as links, text, images, numerical, and categorical values. We propose neural encoders and decoders to replace initial layers of any embedding-based relational model; we apply them to DistMult BIBREF2 and ConvE BIBREF5 here. Specifically, instead of learning a distinct vector for each entity and using enumeration to predict links, MKBE includes the following extensions: (1) introduce additional neural encoders to embed multimodal evidence types that the relational model uses to predict links, and (2) introduce neural decoders that use an entity's embedding to generate its multimodal attributes (like image and text). For example, when the object of a triple is an image, we encode it into a fixed-length vector using a CNN, while textual objects are encoded using RNN-based sequence encoders. The scoring module remains identical to the underlying relational model; given the vector representations of the subject, relation, and object of a triple, we produce a score indicating the probability that the triple is correct using DistMult or ConvE. After learning the KB representation, neural decoders use entity embeddings to generate missing multimodal attributes, for example, generating the description of a person from their structured information in the KB. This unified framework allows for flow of the information across the different relation types (multimodal or otherwise), providing a more accurate modeling of relational data. We provide an evaluation of our proposed approach on two relational KBs. Since we are introducing the multimodal KB completion setting, we provide two benchmarks, created by extending the existing YAGO-10 and MovieLens-100k datasets to include additional relations such as textual descriptions, numerical attributes, and images of the entities. We demonstrate that MKBE utilizes the additional information effectively to provide gains in link-prediction accuracy, achieving state-of-the-art results on these datasets for both the DistMult and the ConvE scoring functions. We evaluate the quality of multimodal attributes generated by the decoders via user studies that demonstrate their realism and information content, along with presenting examples of such generated text and images.
Multimodal KB Completion
As described earlier, KBs often contain different types of information about entities including links, textual descriptions, categorical attributes, numerical values, and images. In this section, we briefly introduce existing relational embedding approaches that focus on modeling the linked data using distinct, dense vectors. We then describe MKBE that extends these approaches to the multimodal setting, i.e., modeling the KB using all the different information to predict the missing links and impute the missing attributes.
Background on Link Prediction
Factual statements in a knowledge base are represented using a triple of subject, relation, and object, $\langle s, r, o\rangle $ , where $s,o\in \xi $ , a set of entities, and $r\in \mathcal {R}$ , a set of relations. Respectively, we consider two goals for relational modeling, (1) to train a machine learning model that can score the truth value of any factual statement, and (2) to predict missing links between the entities. In existing approaches, a scoring function $\psi :\xi \times \mathcal {R}\times \xi \rightarrow \mathbb {R}$ (or sometimes, $[0,1]$ ) is learned to evaluate whether any given fact is true, as per the model. For predicting links between the entities, since the set $\xi $ is small enough to be enumerated, missing links of the form $\langle s,r,?\rangle $ are identified by enumerating all the objects and scoring the triples using $\psi $ (i.e. assume the resulting entity comes from a known set). For example, in Figure 1 , the goal is to predict that Carles Puyol plays for Barcelona.
Many of the recent advances in link prediction use an embedding-based approach; each entity in $\xi $ and relation in $\mathcal {R}$ are assigned distinct, dense vectors, which are then used by $\psi $ to compute the score. In DistMult BIBREF2 , for example, each entity $i$ is mapped to a $d$ -dimensional dense vector ( $\mathbf {e}_i\in \mathbb {R}^{d}$ ) and each relation $r$ to a diagonal matrix $\mathbf {R}_r\in \mathbb {R}^{d\times d}$ , and consequently, the score for any triple $\langle s,r,o\rangle $ is computed as $\psi (s,r,o) = \mathbf {e}_s^T \mathbf {R}_r \mathbf {e}_o$ . Along similar lines, ConvE BIBREF5 uses vectors $\mathbf {e}_s, \mathbf {e}_r, \mathbf {e}_o$ to represent the entities and the relations, then, after applying a CNN layer on $\mathbf {e}_s$ and $\mathbf {e}_r$ , combines the result with $\mathbf {e}_o$ to score a triplet, i.e. the scoring function is $\psi (s,r,o)=f(\text{vec}(f([\bar{\mathbf {e}}_s;\bar{\mathbf {e}}_r]\ast \omega ))\mathbf {W})\,\mathbf {e}_o$ , where $\bar{\cdot }$ denotes a 2D reshaping. Other relational embedding approaches primarily vary in their design of the scoring function BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , but share the shortcoming of assigning distinct vectors to every entity, and assuming that the possible object entities can be enumerated. In this work we focus on DistMult because of its simplicity, popularity, and high accuracy, and ConvE because of its state-of-the-art results.
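For concreteness, a minimal NumPy sketch of the DistMult score above; the embedding sizes and random initialization are placeholders, since in practice the embeddings are learned.

import numpy as np

d, n_entities, n_relations = 200, 10000, 37
E = 0.1 * np.random.randn(n_entities, d)    # entity embeddings e_i
R = 0.1 * np.random.randn(n_relations, d)   # diagonals of the relation matrices R_r

def distmult_score(s, r, o):
    """psi(s, r, o) = e_s^T R_r e_o with R_r diagonal."""
    return float(np.sum(E[s] * R[r] * E[o]))

def rank_objects(s, r):
    """Enumerate and rank all candidate object entities for a query <s, r, ?>."""
    scores = (E[s] * R[r]) @ E.T             # one score per entity
    return np.argsort(-scores)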
Problem Setup
When faced with additional triples in the form of multimodal data, the setup of link prediction is slightly different. Consider a set of all potential multimodal objects, $\mathcal {M}$ , i.e. possible images, text, numerical, and categorical values, and multimodal evidence triples, $\langle s,r,o\rangle $ , where $s\in \xi $ , $r\in \mathcal {R}$ , and $o\in \mathcal {M}$ . Our goals with incorporating multimodal information into the KB remain the same: we want to be able to score the truth of any triple $\langle s,r,o\rangle $ , where $o$ is from $\xi $ (link data) or from $\mathcal {M}$ (multimodal data), and to be able to predict a missing value $\langle s,r,?\rangle $ that may be from $\xi $ or $\mathcal {M}$ (depending on $r$ ). For the example in Figure 1 , in addition to predicting that Carles Puyol plays for Barcelona from multimodal evidence, we are also interested in generating an image for Carles Puyol, if it is missing.
Existing approaches to this problem assume that the subjects and the objects are from a fixed set of entities $\xi $ , and thus are treated as indices into that set, which fails for the multimodal setting primarily for two reasons. First, learning distinct vectors for each object entity does not apply to multimodal values as they will ignore the actual content of the multimodal attribute. For example, there will be no way to generalize vectors learned during training to unseen values that might appear in the test; this is not a problem for the standard setup due to the assumption that all entities have been observed during training. Second, in order to predict a missing multimodal value, $\langle s,r,?\rangle $ , enumeration is not possible as the search space is potentially infinite (or at least intractable to search).
Multimodal KB Embeddings (MKBE)
To incorporate such multimodal objects into the existing relational models like DistMult and ConvE, we propose to learn embeddings for these types of data as well. We utilize recent advances in deep learning to construct encoders for these objects to represent them, essentially providing an embedding $\mathbf {e}_o$ for any object value.
The overall goal remains the same: the model needs to utilize all the observed subjects, objects, and relations, across different data types, in order to estimate whether any fact $\langle s, r, o\rangle $ holds. We present an example of an instantiation of MKBE for a knowledge base containing YAGO entities in Figure 2 a. For any triple $\langle s,r,o\rangle $ , we embed the subject (Carles Puyol) and the relation (such as playsFor or wasBornOn) using a direct lookup. For the object, depending on the domain (indexed, string, numerical, or image, respectively), we use appropriate encoders to compute its embedding $\mathbf {e}_o$ . As in DistMult and ConvE, these embeddings are used to compute the score of the triple.
Via these neural encoders, the model can use the information content of multimodal objects to predict missing links where the objects are from $\xi $ ; however, learning embeddings for objects in $\mathcal {M}$ is not sufficient to generate missing multimodal values, i.e. $\langle s, r, ?\rangle $ where the object is in $\mathcal {M}$ . Consequently, we introduce a set of neural decoders $D:\xi \times \mathcal {R}\rightarrow \mathcal {M}$ that use entity embeddings to generate multimodal values. An outline of our model for imputing missing values is depicted in Figure 2 b. We describe these decoders in the section on decoding multimodal data below.
Encoding Multimodal Data
Here we describe the encoders we use for multimodal objects. A simple example of MKBE is provided in Figure 2 a. As it shows, we use a different encoder to embed each specific data type.
Structured Knowledge Consider a triplet of information in the form of $\langle s ,r ,o\rangle $ . To represent the subject entity $s$ and the relation $r$ as independent embedding vectors (as in previous work), we pass their one-hot encoding through a dense layer. Furthermore, for the case that the object entity is categorical, we embed it through a dense layer with a recently introduced selu activation BIBREF6 , with the same number of nodes as the embedding space dimension.
Numerical Objects in the form of real numbers can provide a useful source of information and are often quite readily available. We use a feed forward layer, after standardizing the input, in order to embed the numbers (in fact, we are projecting them to a higher-dimensional space, from $\mathbb {R}\rightarrow \mathbb {R}^d$ ). It is worth noting that existing methods treat numbers as distinct entities, e.g., learn independent vectors for numbers 39 and 40, relying on data to learn that these values are similar to each other.
Text Since text can be used to store a wide variety of different types of information, for example names versus paragraph-long descriptions, we create different encoders depending on the lengths of the strings involved. For attributes that are fairly short, such as names and titles, we use character-based stacked, bidirectional GRUs to encode them, similar to BIBREF7 , using the final output of the top layer as the representation of the string. For strings that are much longer, such as detailed descriptions of entities consisting of multiple sentences, we treat them as a sequence of words, and use a CNN over the word embeddings, similar to BIBREF8 , in order to learn the embedding of such values. These two encoders provide a fixed length encoding that has been shown to be an accurate semantic representation of strings for multiple tasks BIBREF9 .
Images Images can also provide useful evidence for modeling entities. For example, we can extract person's details such as gender, age, job, etc., from image of the person BIBREF10 , or location information such as its approximate coordinates, neighboring locations, and size from map images BIBREF11 . A variety of models have been used to compactly represent the semantic information in the images, and have been successfully applied to tasks such as image classification, captioning BIBREF12 , and question-answering BIBREF13 . To embed images such that the encoding represents such semantic information, we use the last hidden layer of VGG pretrained network on Imagenet BIBREF14 , followed by compact bilinear pooling BIBREF15 , to obtain the embedding of the images.
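The encoders above can be summarized as a dispatch over object modalities; the PyTorch-style sketch below is schematic (layer sizes, module names, and the replacement of compact bilinear pooling by a plain linear projection are our simplifications), and only illustrates that every object, regardless of modality, ends up as a $d$-dimensional embedding consumed by the unchanged scoring function.

import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Maps an object of any supported modality to a d-dim embedding (sketch)."""
    def __init__(self, n_entities, n_chars, img_feat_dim=4096, d=200):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, d)            # structured objects
        self.num_proj = nn.Linear(1, d)                           # numerical values
        self.char_emb = nn.Embedding(n_chars, 32)
        self.char_gru = nn.GRU(32, d // 2, num_layers=2,
                               bidirectional=True, batch_first=True)   # short strings
        self.word_cnn = nn.Sequential(nn.Conv1d(300, d, kernel_size=3, padding=1),
                                      nn.ReLU(), nn.AdaptiveMaxPool1d(1))  # long text
        self.img_proj = nn.Linear(img_feat_dim, d)                # pretrained VGG features

    def forward(self, obj, modality):
        if modality == "entity":          # (B,) entity indices
            return self.entity_emb(obj)
        if modality == "number":          # (B,) standardized floats
            return self.num_proj(obj.view(-1, 1))
        if modality == "short_text":      # (B, L) character indices
            out, _ = self.char_gru(self.char_emb(obj))
            return out[:, -1, :]
        if modality == "long_text":       # (B, 300, L) word vectors
            return self.word_cnn(obj).squeeze(-1)
        if modality == "image":           # (B, img_feat_dim) VGG features
            return self.img_proj(obj)
        raise ValueError(modality)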
Training We follow the setup from BIBREF5 that consists of binary cross-entropy loss without negative sampling for both ConvE and DistMult scoring. In particular, for a given subject-relation pair $(s,r)$ , we use a binary label vector $\mathbf {t}^{s,r}$ over all entities, indicating whether $\langle s,r,o\rangle $ is observed during training. Further, we denote the model's probability of truth for any triple $\langle s,r,o\rangle $ by $p^{s,r}_o$ , computed using a sigmoid over $\psi (s,r,o)$ . The binary cross-entropy loss is thus defined as: $ -\sum _{(s,r)}\sum _{o} \big ( t^{s,r}_o\log (p^{s,r}_o) + (1-t^{s,r}_o)\log (1 - p^{s,r}_o) \big ). $
We use the same loss for multimodal triples as well, except that the summation is restricted to the objects of the same modality, i.e. for an entity $s$ and its text description, $\mathbf {t}^{s,r}$ is a one-hot vector over all descriptions observed during training.
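A minimal sketch of this loss for a single $(s,r)$ pair, assuming the scores come from the relational scoring function and the candidates are restricted to objects of the appropriate modality:

import torch.nn.functional as F

def kb_bce_loss(scores, targets):
    """scores:  1-D tensor of raw scores psi(s, r, o) for every candidate o.
    targets: 1-D binary tensor t^{s,r} marking the observed triples.
    Applies a sigmoid and the binary cross-entropy described above."""
    return F.binary_cross_entropy_with_logits(scores, targets.float())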
Decoding Multimodal Data
Here we describe the decoders we use to generate multimodal values for entities from their embeddings. The multimodal imputing model is shown in Figure 2 b, which uses different neural decoders to generate missing attributes (more details are provided in supplementary materials).
Numerical and Categorical data To recover the missing numerical and categorical data such as dates, gender, and occupation, we use a simple feed-forward network on the entity embedding to predict the missing attributes. In other words, we are asking the model, if the actual birth date of an entity is not in the KB, what will be the most likely date, given the rest of the relational information. These decoders are trained with the embeddings learned by the encoding component described above, with appropriate losses (RMSE for numerical and cross-entropy for categories).
Text A number of methods have considered generative adversarial networks (GANs) to generate grammatical and linguistically coherent sentences BIBREF16 , BIBREF17 , BIBREF18 . In this work, we use the adversarially regularized autoencoder (ARAE) BIBREF19 to train generators that decodes text from continuous codes, however, instead of using the random noise vector $z$ , we condition the generator on the entity embeddings.
Images Similar to text recovery, to find the missing images we use conditional GAN structure. Specifically, we combine the BE-GAN BIBREF20 structure with pix2pix-GAN BIBREF21 model to generate high-quality images, conditioning the generator on the entity embeddings in the knowledge base representation.
Related Work
There is a rich literature on modeling knowledge bases using low-dimensional representations, differing in the operator used to score the triples. In particular, they use matrix and tensor multiplication BIBREF22 , BIBREF2 , BIBREF23 , Euclidean distance BIBREF1 , BIBREF24 , BIBREF25 , circular correlation BIBREF3 , or the Hermitian dot product BIBREF4 as scoring function. However, the objects for all of these approaches are a fixed set of entities, i.e., they only embed the structured links between the entities. Here, we use different types of information (text, numerical values, images, etc.) in the encoding component by treating them as relational triples.
A number of methods utilize an extra type of information as the observed features for entities, by either merging, concatenating, or averaging the entity and its features to compute its embeddings, such as numerical values BIBREF26 (we use KBLN from this work to compare it with our approach using only numerical as extra attributes), images BIBREF27 , BIBREF28 (we use IKRL from the first work to compare it with our approach using only images as extra attributes), text BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , and a combination of text and image BIBREF35 . Further, BIBREF7 address the multilingual relation extraction task to attain a universal schema by considering raw text with no annotation as extra feature and using matrix factorization to jointly embed KB and textual relations BIBREF36 . In addition to treating the extra information as features, graph embedding approaches BIBREF37 , BIBREF38 consider observed attributes while encoding to achieve more accurate embeddings.
The difference between MKBE and these mentioned approaches is three-fold: (1) we are the first to use different types of information in a unified model, (2) we treat these different types of information (numerical, text, image) as relational triples of structured knowledge instead of predetermined features, i.e., first-class citizens of the KB, and not auxiliary features, and (3) our model represents uncertainty in them, supporting the missing values and facilitating recovery of missing values.
Evaluation Benchmarks
To evaluate the performance of our multimodal relational embeddings approach, we provide two new benchmarks by extending existing datasets. Table 1 provides the statistics of these datasets.
MovieLens-100k dataset BIBREF39 is a popular benchmark in recommendation systems to predict user ratings with contextual features, containing around 1000 users on 1700 movies. MovieLens already contains rich relational data about occupation, gender, zip code, and age for users and genre, release date, and the titles for movies. We augment this data with movie posters collected from TMDB (https://www.themoviedb.org/). We treat the 5-point ratings as five different relations in KB triple format, i.e., $\langle \text{user},r=5,\text{movie}\rangle $ , and evaluate the rating predictions as other relations are introduced.
YAGO-10 Even though MovieLens has a variety of data types, it is still quite small, and is over a specialized domain. We also consider a second dataset that is much more appropriate for knowledge graph completion and is popular for link prediction, the YAGO3-10 knowledge graph BIBREF40 , BIBREF41 . This graph consists of around 120,000 entities, such as people, locations, and organizations, and 37 relations, such as kinship, employment, and residency, and thus much closer to the traditional information extraction goals. We extend this dataset with the textual description (as an additional relation) and the images associated with each entity (for half of the entities), provided by DBpedia BIBREF42 . We also include additional relations such as wasBornOnDate that have dates as values.
Experiment Results
In this section, we first evaluate the ability of MKBE to utilize the multimodal information by comparing to DistMult and ConvE through a variety of tasks. Then, by considering the recovery of missing multimodal values (text, images, and numerical) as the motivation, we examine the capability of our models in generation. Details of the hyperparameters and model configurations are provided in the supplementary material, and the source code and the datasets to reproduce the results are available at https://github.com/pouyapez/mkbe.
Link Prediction
In this section, we evaluate the capability of MKBE in the link prediction task. The goal is to calculate the MRR and Hits@N metrics (ranking evaluations) of recovering the missing entities from triples in the test dataset, performed by ranking all the entities and computing the rank of the correct entity. Similar to previous work, here we focus on providing the results in a filtered setting, that is, we only rank triples in the test data against the ones that never appear in either the train or test datasets.
MovieLens-100k We train the model using Rating as the relation between users and movies. We use a character-level GRU for the movie titles, a separate feed-forward network for age, zip code, and release date, and finally, we use a VGG network on the posters (for every other relation we use a dense layer). Table 2 shows the link (rating) prediction evaluation on MovieLens when test data is consisting only of rating triples. We calculate our metrics by ranking the five relations that represent ratings instead of object entities. We label models that use ratings as R, movie-attributes as M, user-attributes as U, movie titles as T, and posters as P. As shown, the model R+M+U+T outperforms others with a considerable gap demonstrating the importance of incorporating extra information. Hits@1 for the baseline is 40%, matching existing recommendation systems BIBREF43 . From these results, we see that the models benefit more from titles as compared to the posters.
YAGO-10 The result of link prediction on our YAGO dataset is provided in Table 3 . We label models using structured information as S, entity-description as D, numerical information as N, and entity-image as I. We see that the model that encodes all type of information consistently performs better than other models, indicating that the model is effective in utilizing the extra information. On the other hand, the model that uses only text performs the second best, suggesting the entity descriptions contain more information than others. It is notable that model $S$ is outperformed by all other models, demonstrating the importance of using different data types for attaining higher accuracy. This observation is consistent across both DistMult and ConvE, and the results obtained on ConvE are the new state-of-art for this dataset (as compared to BIBREF5 ). Furthermore, we implement KBLN BIBREF26 and IKRL BIBREF27 to compare them with our S+N and S+I models. Our models outperform these approaches, in part because both of these methods require same multimodal attributes for both of the subject and object in each triple.
Relation Breakdown We perform additional analysis on the YAGO dataset to gain a deeper understanding of the performance of our model using ConvE method. Table 4 compares our models on some of the most frequent relations. As shown, the model that includes textual description significantly benefits isAffiliatedTo, and playsFor relations, as this information often appears in text. Moreover, images are useful for hasGender and isMarriedTo, while for the relation isConnectedTo, numerical (dates) are more effective than images.
Imputing Multimodal Attributes
Here we present an evaluation on imputing multimodal attributes (text, image and numerical).
Numerical and Categorical Table 6 shows the performance of predicting missing numerical attributes in the data, evaluated via holding out $10\%$ of the data. We only consider numerical values (dates) that are more recent than $1000 AD$ to focus on more relevant entities. In addition to the neural decoder, we train a search-based decoder as well, by considering all 1017 choices in the interval $[1000,2017]$ , and for each triple in the test data, finding the number that the model scores the highest; we use this value to compute the RMSE. As we can see, the all-info model outperforms the other methods on both datasets, demonstrating that MKBE is able to utilize different multimodal values for modeling numerical information. Further, the neural decoder performs better than the search-based one, showing the importance of a proper decoder, even for finite, enumerable sets. Along the same lines, Table 6 shows genre prediction accuracy on $10\%$ held-out MovieLens data. Again, the model that uses all the information outperforms the other methods.
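The search-based decoder above amounts to enumerating candidate years and keeping the one the trained relational model scores highest; a sketch with assumed model interfaces (score and encode_number are hypothetical names):

def predict_year_by_search(model, subject, relation, lo=1000, hi=2017):
    """Return the year in [lo, hi] whose triple <subject, relation, year>
    receives the highest score from the trained model (search-based decoder)."""
    candidates = range(lo, hi + 1)
    return max(candidates,
               key=lambda year: model.score(subject, relation,
                                            model.encode_number(year)))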
MovieLens Titles For generating movie titles, we randomly consider 200 of them as test, 100 as validation, and the remaining ones as training data. The goal here is to generate titles for movies in the test data using the previously mentioned GAN structure. To evaluate our results we conduct a human experiment on Amazon Mechanical Turk (AMT) asking participant two questions: (1) whether they find the movie title real, and (2) which of the four genres is most appropriate for the given title. We consider 30 movies each as reference titles, fake titles generated from only ratings as conditional data, and fake titles conditioned on all the information. Further, each question was asked for 3 participants, and the results computed over the majority choice are shown in Table 7 . Fake titles generated with all the information are more similar to reference movie titles, demonstrating that the embeddings that have access to more information effectively generate higher-quality titles.
YAGO Descriptions The goal here is to generate descriptive text for entities from their embeddings. Since the original descriptions can be quite long, we consider first sentences that are less than 30 tokens, resulting in $96,405$ sentences. We randomly consider 3000 of them as test, 3000 as validation, and the remaining as training data for the decoder. To evaluate the quality of the generated descriptions, and whether they are appropriate for the entity, we conduct a user study asking participants if they can guess the realness of sentences and the occupation (entertainer, sportsman, or politician), gender, and age (above or below 35) of the subject entity from the description. We provide 30 examples for each model asking each question from 3 participants and calculate the accuracy of the majority vote. The results presented in Table 8 show that the models are fairly competent in informing the users of the entity information, and further, descriptions generated from embeddings that had access to more information outperforms the model with only structured data. Examples of generated descriptions are provided in Table 9 (in addition to screenshots of user study, more examples of generated descriptions, and MovieLens titles are provided in supplementary materials).
YAGO Images Here, we evaluate the quality of images generated from entity embeddings by humans ( $31,520$ images, split into train/test). Similar to descriptions, we conduct a study asking users to guess the realness of images and the occupation, gender, and age of the subject. We provide 30 examples for each model asking each question from 3 participants, and use the majority choice.
The results in Table 8 indicate that the images generated with embeddings based on all the information are more accurate for gender and occupation. Guessing age from the images is difficult since the image on DBpedia may not correspond to the age of the person, i.e. some of the older celebrities had photos from their youth. Examples of generated images are shown in Table 10 .
Discussion and Limitations
An important concern regarding KB embedding approaches is their scalability. While large KBs are a problem for all embedding-based link prediction techniques, MKBE is not significantly worse than existing ones because we treat multimodal information as additional triples. Specifically, although multimodal encoders/decoders are more expensive to train than existing relational models, the cost is still additive as we are effectively increasing the size of the training dataset. In addition to scalability, there are few other challenges when working with multimodal attributes. Although multimodal evidence provides more information, it is not at all obvious which parts of this additional data are informative for predicting the relational structure of the KB, and the models are prone to overfitting. MKBE builds upon the design of neural encoders and decoders that have been effective for specific modalities, and the results demonstrate that it is able to utilize the information effectively. However, there is still a need to further study models that capture multimodal attributes in a more efficient and accurate manner.
Since our model for imputing multimodal attributes is based on the GAN structure and the embeddings learned from the KB representation, the generated attributes are directly limited by the power of GAN models and the amount of information in the embedding vectors. Although our generated attributes convey several aspects of the corresponding entities, their quality is far from ideal due to the size of our datasets (both our image and text datasets are an order of magnitude smaller than common datasets in the existing text/image generation literature) and the amount of information captured by the embedding vectors (the knowledge graphs are sparse). In future work, we would like to (1) expand the multimodal datasets to have more attributes (use many more entities from YAGO), and (2) instead of using learned embeddings to generate missing attributes, utilize the knowledge graph directly for generation.
Conclusion
Motivated by the need to utilize multiple sources of information, such as text and images, to achieve more accurate link prediction, we present a novel neural approach to multimodal relational learning. We introduce MKBE, a link prediction model that consists of (1) a compositional encoding component to jointly learn the entity and multimodal embeddings to encode the information available for each entity, and (2) an adversarially trained decoding component that uses these entity embeddings to impute missing multimodal values. We enrich two existing datasets, YAGO-10 and MovieLens-100k, with multimodal information to introduce benchmarks. We show that MKBE, in comparison to existing link predictors DistMult and ConvE, can achieve higher accuracy on link prediction by utilizing the multimodal evidence. Further, we show that MKBE effectively incorporates relational information to generate high-quality multimodal attributes like images and text. We have released the datasets and the open-source implementation of our models at https://github.com/pouyapez/mkbe.
Acknowledgements
We would like to thank Zhengli Zhao, Robert L. Logan IV, Dheeru Dua, Casey Graff, and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF award #IIS-1817183. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. | merging, concatenating, or averaging the entity and its features to compute its embeddings, graph embedding approaches, matrix factorization to jointly embed KB and textual relations |
6c4d121d40ce6318ecdc141395cdd2982ba46cff | 6c4d121d40ce6318ecdc141395cdd2982ba46cff_0 | Q: What is the data selection paper in machine translation
Text: Introduction
Machine Reading Comprehension (MRC) has gained growing interest in the research community BIBREF0 , BIBREF1 . In an MRC task, the machine reads a text passage and a question, and generates (or selects) an answer based on the passage. This requires the machine to possess strong comprehension, inference and reasoning capabilities. Over the past few years, there has been much progress in building end-to-end neural network models BIBREF2 for MRC. However, most public MRC datasets (e.g., SQuAD, MS MARCO, TriviaQA) are typically small (less than 100K) compared to the model size (such as SAN BIBREF3 , BIBREF4 with around 10M parameters). To prevent over-fitting, recently there have been some studies on using pre-trained word embeddings BIBREF5 and contextual embeddings in the MRC model training, as well as back-translation approaches BIBREF1 for data augmentation.
Multi-task learning BIBREF6 is a widely studied area in machine learning, aiming at better model generalization by combining training datasets from multiple tasks. In this work, we explore a multi-task learning (MTL) framework to enable the training of one universal model across different MRC tasks for better generalization. Intuitively, this multi-task MRC model can be viewed as an implicit data augmentation technique, which can improve generalization on the target task by leveraging training data from auxiliary tasks.
We observe that merely adding more tasks cannot provide much improvement on the target task. Thus, we propose two MTL training algorithms to improve the performance. The first method simply adopts a sampling scheme, which randomly selects training data from the auxiliary tasks controlled by a ratio hyperparameter; The second algorithm incorporates recent ideas of data selection in machine translation BIBREF7 . It learns the sample weights from the auxiliary tasks automatically through language models. Prior to this work, many studies have used upstream datasets to augment the performance of MRC models, including word embedding BIBREF5 , language models (ELMo) BIBREF8 and machine translation BIBREF1 . These methods aim to obtain a robust semantic encoding of both passages and questions. Our MTL method is orthogonal to these methods: rather than enriching semantic embedding with external knowledge, we leverage existing MRC datasets across different domains, which help make the whole comprehension process more robust and universal. Our experiments show that MTL can bring further performance boost when combined with contextual representations from pre-trained language models, e.g., ELMo BIBREF8 .
To the best of our knowledge, this is the first work that systematically explores multi-task learning for MRC. In previous methods that use language models and word embedding, the external embedding/language models are pre-trained separately and remain fixed during the training of the MRC model. Our model, on the other hand, can be trained with more flexibility on various MRC tasks. MTL is also faster and easier to train than embedding/LM methods: our approach requires no pre-trained models, whereas back translation and ELMo both rely on large models that would need days to train on multiple GPUs BIBREF9 , BIBREF8 .
We validate our MTL framework with two state-of-the-art models on four datasets from different domains. Experiments show that our methods lead to a significant performance gain over single-task baselines on SQuAD BIBREF0 , NewsQA BIBREF10 and Who-Did-What BIBREF11 , while achieving state-of-the-art performance on the latter two. For example, on NewsQA BIBREF10 , our model surpassed human performance by 13.4 (46.5 vs 59.9) and 3.2 (72.6 vs 69.4) absolute points in terms of exact match and F1. The contribution of this work is three-fold. First, we apply multi-task learning to the MRC task, which brings significant improvements over single-task baselines. Second, the performance gain from MTL can be easily combined with existing methods to obtain further performance gain. Third, the proposed sampling and re-weighting scheme can further improve the multi-task learning performance.
Related Work
Studies in machine reading comprehension mostly focus on architecture design of neural networks, such as bidirectional attention BIBREF2 , dynamic reasoning BIBREF12 , and parallelization BIBREF1 . Some recent work has explored transfer learning that leverages out-domain data to learn MRC models when no training data is available for the target domain BIBREF13 . In this work, we explore multi-task learning to make use of the data from other domains, while we still have access to target domain training data.
Multi-task learning BIBREF6 has been widely used in machine learning to improve generalization using data from multiple tasks. For natural language processing, MTL has been successfully applied to low-level parsing tasks BIBREF14 , sequence-to-sequence learning BIBREF15 , and web search BIBREF16 . More recently, BIBREF17 proposes to cast all tasks from parsing to translation as a QA problem and use a single network to solve all of them. However, their results show that multi-task learning hurts the performance of most tasks when tackling them together. Differently, we focus on applying MTL to the MRC task and show significant improvement over single-task baselines.
Our sample re-weighting scheme bears some resemblance to previous MTL techniques that assign weights to tasks BIBREF18 . However, our method gives a more granular score for each sample and provides better performance for multi-task learning MRC.
Model Architecture
We call our model Multi-Task-SAN (MT-SAN), which is a variation of SAN BIBREF3 model with two main differences: i) we add a highway network layer after the embedding layer, the encoding layer and the attention layer; ii) we use exponential moving average BIBREF2 during evaluation. The SAN architecture and our modifications are briefly described below and in Section " Experiment Details" , and detailed description can be found in BIBREF3 .
Similar to MT-SAN, we add a highway network after the lexicon encoding layer and the contextual encoding layer and use a different answer module for each dataset. We apply MT-DrQA to a broader range of datasets. For span-detection datasets such as SQuAD, we use the same answer module as DrQA. For cloze-style datasets like Who-Did-What, we use the attention-sum reader BIBREF39 as the answer module. For classification tasks required by SQuAD v2.0 BIBREF42 , we apply a softmax to the last state in the memory layer and use it as the prediction.
Input Format
For most tasks we consider, our MRC model takes a triplet $(Q,P,A)$ as input, where $Q=(q_1,...,q_m), P=(p_1,...,p_n)$ are the word index representations of a question and a passage, respectively , and $A=(a_{\text{begin}}, a_{\text{end}})$ is the index of the answer span. The goal is to predict $A$ given $(Q,P)$ .
Lexicon Encoding Layer
We map the word indices of $P$ and $Q$ into their 300-dim GloVe vectors BIBREF5 . We also use the following additional information for embedding words: i) 16-dim part-of-speech (POS) tagging embedding; ii) 8-dim named-entity-recognition (NER) embedding; iii) 3-dim exact match embedding: $f_{\text{exact\_match}}(p_i)=\mathbb {I}(p_i\in Q)$ , where matching is determined based on the original word, lower case, and lemma form, respectively; iv) Question-enhanced passage word embeddings: $f_{\text{align}}(p_i)=\sum _{j} \gamma _{i,j} g(\text{GloVe}(q_j))$ , where
$$\gamma _{i,j}=\frac{\exp \big (g(\text{GloVe}(p_i))\cdot g(\text{GloVe}(q_j))\big )}{\sum _{j^{\prime }}\exp \big (g(\text{GloVe}(p_i))\cdot g(\text{GloVe}(q_{j^{\prime }}))\big )}$$ (Eq. 3)
is the similarity between passage word $p_i$ and question word $q_j$ , and $g(\cdot )$ is a 300-dim single-layer neural net with Rectified Linear Unit (ReLU) activation, $g(x)=\text{ReLU}(W_1x)$ ; v) Passage-enhanced question word embeddings: the same as iv) but computed in the reverse direction. To reduce the dimension of the input to the next layer, the 624-dim input vectors of passages and questions are passed through a ReLU layer to reduce their dimensions to 125.
After the ReLU network, we pass the 125-dim vectors through a highway network BIBREF19 , to adapt to the multi-task setting: $g_i = \text{sigmoid}(W_2p_i^t), p_i^t=\text{ReLU}(W_3p_i^t)\odot g_i + g_i\odot p_i^t$ , where $p_i^t$ is the vector after ReLU transformation. Intuitively, the highway network here provides a neuron-wise weighting, which can potentially handle the large variation in data introduced by multiple datasets.
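A NumPy sketch of the aligned-question feature iv) above; the dot-product similarity inside the softmax follows our reading of Eq. 3, and the weight-matrix shape is an assumption.

import numpy as np

def aligned_question_embedding(P_glove, Q_glove, W1):
    """Question-enhanced passage embeddings f_align.
    P_glove: (n, 300) passage GloVe vectors; Q_glove: (m, 300) question GloVe
    vectors; W1: (300, 300) weight of the shared projection g(x) = ReLU(W1 x)."""
    g = lambda X: np.maximum(X @ W1.T, 0.0)        # apply g row-wise
    sim = g(P_glove) @ g(Q_glove).T                # (n, m) similarity scores
    sim -= sim.max(axis=1, keepdims=True)          # softmax stability
    gamma = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    return gamma @ g(Q_glove)                      # (n, 300): rows are f_align(p_i)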
Contextual Encoding Layer
Both the passage and question encodings go through a 2-layer Bidirectional Long-Short Term Memory (BiLSTM, BIBREF20 , BIBREF20 ) network in this layer. We append a 600-dim CoVe vector BIBREF21 to the output of the lexicon encoding layer as input to the contextual encoders. For the experiments with ELMo, we also append a 1024-dim ELMo vector. Similar to the lexicon encoding layer, the outputs of both layers are passed through a highway network for multi-tasking. Then we concatenate the output of the two layers to obtain $H^q\in \mathbb {R}^{2d\times m}$ for the question and $H^p=\mathbb {R}^{2d\times n}$ the passage, where $d$ is the dimension of the BiLSTM.
Memory/Cross Attention Layer
We fuse $H^p$ and $H^q$ through cross attention and generate a working memory in this layer. We adopt the attention function from BIBREF22 and compute the attention matrix as $C=\text{dropout}\left(f_{\text{attention}}(\hat{H}^q, \hat{H}^p)\right) \in \mathbb {R}^{m\times n}.$ We then use $C$ to compute a question-aware passage representation as $U^p = \text{concat}(H^p, H^qC)$ . Since a passage usually includes several hundred tokens, we use the method of BIBREF23 to apply self attention to the representations of passage to rearrange its information: $ \hat{U}^p = U^p\text{drop}_{\text{diag}}(f_{\text{attention}}(U^p, U^p)),$ where $\text{drop}_{\text{diag}}$ means that we only drop diagonal elements on the similarity matrix (i.e., attention with itself). Then, we concatenate $U^p$ and $\hat{U}^p$ and pass them through a BiLSTM: $M=\text{BiLSTM}([U^p];\hat{U}^p])$ . Finally, output of the BiLSTM (after concatenating two directions) goes through a highway layer to produce the memory.
Answer Module
The base answer module is the same as SAN, which computes a distribution over spans in the passage. Firstly, we compute an initial state $s_0$ by self attention on $H^q$ : $s_0\leftarrow \text{Highway}\left(\sum _{j} \frac{\exp (w_4H^q_j)}{\sum _{j^{\prime }}\exp (w_4H^q_{j^{\prime }})}\cdot H^q_j\right)$ . The final answer is computed through $T$ time steps. At step $t\in \lbrace 1,...,T-1\rbrace $ , we compute the new state using a Gated Recurrent Unit (GRU, BIBREF24 ) $s_t=\text{GRU}(s_{t-1},x_t)$ , where $x_t$ is computed by attention between $M$ and $s_{t-1}$ : $x_t=\sum _{j} \beta _j M_j, \beta _j=\text{softmax}(s_{t-1}W_5M)$ . Then each step produces a prediction of the start and end of answer spans through a bilinear function: $P^{\text{begin}}_t=\text{softmax}(s_tW_6M)$ and $P^{\text{end}}_t=\text{softmax}([s_t;\sum _j P^{\text{begin}}_{t,j}M_j]W_7M)$ . The final prediction is the average over all time steps: $P^{\text{begin}}=\frac{1}{T}\sum _t P^{\text{begin}}_t$ , $P^{\text{end}}=\frac{1}{T}\sum _t P^{\text{end}}_t$ . We randomly apply dropout on the step level in each time step during training, as done in BIBREF3 . During training, the objective is the log-likelihood of the ground truth: $l(P,Q,A)=-\log \big (P^{\text{begin}}_{a_{\text{begin}}}\,P^{\text{end}}_{a_{\text{end}}}\big )$ .
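Read together with the equations above, the answer module can be sketched as the following PyTorch-style module; the hidden sizes, the number of steps, and the exact shapes of the bilinear layers are assumptions for illustration, not a verbatim reimplementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStepAnswerModule(nn.Module):
    """Sketch of the T-step answer module: state s_t attends over memory M and
    emits begin/end distributions that are averaged across steps."""
    def __init__(self, d, T=5):
        super().__init__()
        self.T = T
        self.gru = nn.GRUCell(2 * d, 2 * d)
        self.W5 = nn.Linear(2 * d, 2 * d, bias=False)   # attention bilinear
        self.W6 = nn.Linear(2 * d, 2 * d, bias=False)   # begin bilinear
        self.W7 = nn.Linear(4 * d, 2 * d, bias=False)   # end bilinear

    def forward(self, s0, M):
        # M: (n, 2d) passage memory; s0: (2d,) initial state from question self-attention
        s, begins, ends = s0, [], []
        for _ in range(self.T):
            beta = F.softmax(M @ self.W5(s), dim=0)          # attention over memory
            x = beta @ M                                      # summary x_t
            s = self.gru(x.unsqueeze(0), s.unsqueeze(0)).squeeze(0)
            p_begin = F.softmax(M @ self.W6(s), dim=0)
            span = p_begin @ M                                # begin-weighted memory
            p_end = F.softmax(M @ self.W7(torch.cat([s, span])), dim=0)
            begins.append(p_begin)
            ends.append(p_end)
        return torch.stack(begins).mean(0), torch.stack(ends).mean(0)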
Multi-task Learning Algorithms
We describe our MTL training algorithms in this section. We start with a very simple and straightforward algorithm that samples one task and one mini-batch from that task at each iteration. To improve the performance of MTL on a target dataset, we propose two methods to re-weight samples according to their importance. The first proposed method directly lowers the probability of sampling from a particular auxiliary task; however, this probability has to be chosen using grid search. We then propose another method that avoids such search by using a language model.
Algorithm 1: Multi-task Learning of MRC
Input: $K$ different datasets $\mathcal {D}_1,...,\mathcal {D}_K$ , max_epoch
Output: the model with the best evaluation performance
1: Initialize the model $\mathcal {M}$
2: for epoch $=1,2,...$ , max_epoch do
3:   Divide each dataset $\mathcal {D}_k$ into $N_k$ mini-batches $\mathcal {D}_k=\lbrace b_1^k,...,b_{N_k}^k\rbrace $ , $1\le k\le K$
4:   Put all mini-batches together and randomly shuffle their order, to obtain a sequence $B=(b_1,...,b_L)$ , where $L=\sum _k N_k$
5:   for each mini-batch $b\in B$ : perform a gradient update on $\mathcal {M}$ with loss $l(b)$
6:   Evaluate development set performance
Suppose we have $K$ different tasks; the simplest version of our MTL training procedure is shown in Algorithm 1. In each epoch, we take all the mini-batches from all datasets and shuffle them for model training, and the same set of parameters is used for all tasks. Perhaps surprisingly, as we will show in the experiment results, this simple baseline method can already lead to a considerable improvement over the single-task baselines.
Mixture Ratio
One observation is that the performance of our model using Algorithm " Multi-task Learning Algorithms" starts to deteriorate as we add more and more data from other tasks into our training pool. We hypothesize that the external data will inevitably bias the model towards auxiliary tasks instead of the target task.
Algorithm 2: Multi-task Learning of MRC with mixture ratio, targeting $\mathcal {D}_1$
Input: $K$ different datasets $\mathcal {D}_1,...,\mathcal {D}_K$ , max_epoch, mixture ratio $\alpha $
Output: the model with the best evaluation performance
1: Initialize the model $\mathcal {M}$
2: for epoch $=1,2,...$ , max_epoch do
3:   Divide each dataset $\mathcal {D}_k$ into $N_k$ mini-batches $\mathcal {D}_k=\lbrace b_1^k,...,b_{N_k}^k\rbrace $ , $1\le k\le K$
4:   $S\leftarrow \lbrace b_1^1,...,b_{N_1}^1\rbrace $
5:   Randomly pick a fraction $\alpha $ of the mini-batches from $\mathcal {D}_2,...,\mathcal {D}_K$ (i.e., $\lfloor \alpha \sum _{k=2}^{K} N_k\rfloor $ mini-batches) and add them to $S$
6:   Assign the mini-batches in $S$ a random order to obtain a sequence $B=(b_1,...,b_L)$ , where $L=|S|$
7:   for each mini-batch $b\in B$ : perform a gradient update on $\mathcal {M}$ with loss $l(b)$
8:   Evaluate development set performance
To avoid such adverse effects, we introduce a mixture ratio parameter during training. The training algorithm with the mixture ratio is presented in Algorithm 2, with $\mathcal {D}_1$ being the target dataset. In each epoch, we use all mini-batches from $\mathcal {D}_1$ , while only a ratio $\alpha $ of the mini-batches from the external datasets are used to train the model. In our experiments, we use a hyperparameter search to find the best $\alpha $ for each dataset combination. This method resembles previous methods in multi-task learning that weight losses differently (e.g., BIBREF18 ), and is very easy to implement. In our experiments, we use Algorithm 2 to train our network when we only use 2 datasets for MTL.
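A compact sketch of the mixture-ratio training loop of Algorithm 2; model.update, model.snapshot, and evaluate are assumed interfaces standing in for a gradient step, checkpointing, and development-set evaluation.

import random

def train_with_mixture_ratio(model, target_batches, aux_batches, alpha,
                             max_epoch, evaluate):
    """Each epoch uses all target-task mini-batches plus a fraction alpha of the
    auxiliary mini-batches, shuffled together (Algorithm 2, sketch)."""
    best_state, best_score = None, float("-inf")
    for _ in range(max_epoch):
        n_aux = int(alpha * len(aux_batches))
        pool = list(target_batches) + random.sample(list(aux_batches), n_aux)
        random.shuffle(pool)
        for batch in pool:
            model.update(batch)                 # gradient step on loss l(b)
        score = evaluate(model)                 # development-set performance
        if score > best_score:
            best_state, best_score = model.snapshot(), score
    return best_state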
Sample Re-Weighting
The mixture ratio (Algorithm 2) dramatically improves the performance of our system. However, it requires finding an ideal ratio by hyperparameter search, which is time-consuming. Furthermore, the ratio gives the same weight to every auxiliary example, but the relevance of each data point to the target task can vary greatly.
We develop a novel re-weighting method to resolve these problems, using ideas inspired by data selection in machine translation BIBREF26 , BIBREF7 . We use $(Q^{k},P^{k},A^{k})$ to represent a data point from the $k$ -th task for $1\le k\le K$ , with $k=1$ being the target task. Since the passage styles are hard to evaluate, we only evaluate data points based on $Q^{k}$ and $A^k$ . Note that only data from auxiliary task ( $2\le k\le K$ ) is re-weighted; target task data always have weight 1.
Our scores consist of two parts, one for questions and one for answers. For questions, we create language models (detailed in Section " Experiment Details" ) using questions from each task, which we represent as $LM_k$ for the $k$ -th task. For each question $Q^{k}$ from auxiliary tasks, we compute a cross-entropy score:
$$H_{C,Q}(Q^{k})=-\frac{1}{m}\sum _{w\in Q^{k}}\log (LM_{C}(w)),$$ (Eq. 10)
where $C\in \lbrace 1,k\rbrace $ is the target or auxiliary task, $m$ is the length of question $Q^{k}$ , and $w$ iterates over all words in $Q^{k}$ .
It is hard to build language models for answers since they are typically very short (e.g., answers on SQuAD includes only one or two words in most cases). We instead just use the length of answers as a signal for scores. Let $l_{a}^{k}$ be the length of $A^{k}$ , the cross-entropy answer score is defined as:
$$H_{C,A}(A^{k})=-\log \text{freq}_C(l_a^{k}),$$ (Eq. 11)
where freq $_C$ is the frequency of answer lengths in task $C\in \lbrace 1,k\rbrace $ .
The cross entropy scores are then normalized over all samples in task $C$ to create a comparable metric across all auxiliary tasks:
$$H_{C,Q}^{\prime }(Q^k)=\frac{H_{C,Q}(Q^k)-\min (H_{C,Q})}{\max (H_{C,Q})-\min (H_{C,Q})} \\ H_{C,A}^{\prime }(A^k)=\frac{H_{C,A}(A^k)-\min (H_{C,A})}{\max (H_{C,A})-\min (H_{C,A})}$$ (Eq. 12)
for $C\in \lbrace 1,2,...,K\rbrace $ . For $C\in \lbrace 2,...,K\rbrace $ , the maximum and minimum are taken over all samples in task $k$ . For $C=1$ (target task), they are taken over all available samples.
Intuitively, $H^{\prime }_{C,Q}$ and $H^{\prime }_{C,A}$ represent the similarity of text $Q,A$ to task $C$ ; a low $H^{\prime }_{C,Q}$ (resp. $H^{\prime }_{C,A}$ ) means that $Q^k$ (resp. $A^k$ ) is easy to predict and similar to $C$ , and vice versa. We would like samples that are most similar to data in the target domain (low $H^{\prime }_1$ ), and most different (informative) from data in the auxiliary task (high $H^{\prime }_k$ ). We thus compute the following cross-entropy difference for each external data point:
$$\text{CED}(Q^{k},A^{k})=&(H^{\prime }_{1,Q}(Q^{k})-H^{\prime }_{k,Q}(Q^{k}))+\nonumber \\ &(H^{\prime }_{1,A}(A^{k})-H^{\prime }_{k,A}(A^{k})) $$ (Eq. 13)
for $k\in \lbrace 2,...,K\rbrace $ . Note that a low CED score indicates high importance. Finally, we transform the scores to weights by taking negative, and normalize between $[0,1]$ :
$$\text{CED}^{\prime }(Q^{k},A^{k}) =1-\frac{\text{CED}(Q^{k},A^{k})-\min (\text{CED})}{\max (\text{CED})-\min (\text{CED})}.$$ (Eq. 14)
Here the maximum and minimum are taken over all available samples and task. Our training algorithm is the same as Algorithm 1, but for minibatch $b$ we instead use the loss
$$l(b)=\sum _{(P,Q,A)\in b} \text{CED}^{\prime }(Q,A)l(P,Q,A)$$ (Eq. 15)
in step " Multi-task Learning Algorithms" . We define $\text{CED}^{\prime }(Q^1,A^1)\equiv 1$ for all target samples $(P^1,Q^1,A^1)$ .
Experiments
Our experiments are designed to answer the following questions on multi-task learning for MRC:
1. Can we improve the performance of existing MRC systems using multi-task learning?
2. How does multi-task learning affect the performance if we combine it with other external data?
3. How does the learning algorithm change the performance of multi-task MRC?
4. How does our method compare with existing MTL methods?
We first present our experiment details and results for MT-SAN. Then, we provide a comprehensive study on the effectiveness of various MTL algorithms in Section " Comparison of Different MTL Algorithms" . At last, we provide some additional results on combining MTL with DrQA BIBREF29 to show the flexibility of our approach .
Datasets
We conducted experiments on SQuAD ( BIBREF0 ), NewsQA BIBREF10 , MS MARCO (v1, BIBREF30 ) and WDW BIBREF11 . Dataset statistics are shown in Table 1 . Although similar in size, these datasets are quite different in domains, lengths of text, and types of task. In the following experiments, we will validate whether including external datasets as additional input information (e.g., a pre-trained language model on these datasets) helps boost the performance of MRC systems.
Experiment Details
We mostly focus on span-based datasets for MT-SAN, namely SQuAD, NewsQA, and MS MARCO. We convert MS MARCO into an answer-span dataset to be consistent with SQuAD and NewsQA, following BIBREF3 . For each question, we search for the best span using ROUGE-L score in all passage texts and use the span to train our model. We exclude questions with maximal ROUGE-L score less than 0.5 during training. For evaluation, we use our model to find a span in all passages. The prediction score is multiplied with the ranking score, trained following BIBREF31 's method to determine the final answer.
We train our networks using the algorithms in Section "Multi-task Learning Algorithms", using SQuAD as the target task. For experiments with two datasets, we use Algorithm 2; for experiments with three datasets we find the re-weighting mechanism in Section "Sample Re-Weighting" to have a better performance (a detailed comparison will be presented in Section "Comparison of Different MTL Algorithms"). For generating sample weights, we build an LSTM language model on questions following the implementation of BIBREF32 with the same hyperparameters. We only keep the 10,000 most frequent words, and replace the other words with a special out-of-vocabulary token.
Parameters of MT-SAN are mostly the same as in the original paper BIBREF3 . We utilize spaCy to tokenize the text and generate part-of-speech and named entity labels. We use a 2-layer BiLSTM with 125 hidden units as the BiLSTM throughout the model. During training, we drop the activation of each neuron with 0.3 probability. For optimization, we use Adamax BIBREF33 with a batch size of 32 and a learning rate of 0.002. For prediction, we compute an exponential moving average (EMA, BIBREF2 BIBREF2 ) of model parameters with a decay rate of 0.995 and use it to compute the model performance. For experiments with ELMo, we use the model implemented by AllenNLP . We truncate passage to contain at most 1000 tokens during training and eliminate those data with answers located after the 1000th token. The training converges in around 50 epochs for models without ELMo (similar to the single-task SAN); For models with ELMo, the convergence is much faster (around 30 epochs).
Performance of MT-SAN
In the following sub-sections, we report our results on SQuAD and MARCO development sets, as well as on the development and test sets of NewsQA . All results are single-model performance unless otherwise noted.
The multi-task learning results of SAN on SQuAD are summarized in Table 2 . By using MTL on SQuAD and NewsQA, we can improve the exact-match (EM) and F1 score by (2%, 1.5%), respectively, both with and without ELMo. The similar gain indicates that our method is orthogonal to ELMo. Note that our single-model performance is slightly higher than the original SAN, by incorporating EMA and highway networks. By incorporating with multi-task learning, it further improves the performance. The performance gain by adding MARCO is relatively smaller, with 1% in EM and 0.5% in F1. We conjecture that MARCO is less helpful due to its differences in both the question and answer style. For example, questions in MS MARCO are real web search queries, which are short and may have typos or abbreviations; while questions in SQuAD and NewsQA are more formal and well written. Using 3 datasets altogether provides another marginal improvement. Our model obtains the best results among existing methods that do not use a large language model (e.g., ELMo). Our ELMo version also outperforms any other models which are under the same setting. We note that BERT BIBREF28 uses a much larger model than ours(around 20x), and we leave the performance of combining BERT with MTL as interesting future work.
The results of multi-task learning on NewsQA are shown in Table 3. The performance gain from multi-task learning is even larger on NewsQA, at over 2% in both EM and F1. Experiments with and without ELMo give similar results. Notably, our approach not only achieves new state-of-the-art results by a large margin but also surpasses human performance on NewsQA.
Finally, we report MT-SAN performance on MS MARCO in Table 4. Multi-tasking with SQuAD and NewsQA provides a boost in BLEU-1 and ROUGE-L similar to the gains observed on SQuAD and NewsQA. Our method does not achieve very high performance compared to previous work, probably because we do not apply common techniques such as yes/no classification or cross-passage ranking BIBREF36.
We also test the robustness of our algorithm with another set of experiments on SQuAD and WDW. WDW differs far more from the other three datasets (SQuAD, NewsQA, MS MARCO): WDW guarantees that the answer is always a person, whereas only 12.9% of SQuAD questions are of this kind. Moreover, WDW is a cloze dataset, whereas in SQuAD and NewsQA answers are spans in the passage. We use a task-specific answer layer in this experiment together with Algorithm "Answer Module for WDW"; the WDW answer module is the same as in AS Reader BIBREF39, which we describe in the appendix for completeness. Despite these large differences between datasets, our results (Table 5) show that MTL still provides a moderate performance boost when jointly training on SQuAD (around 0.7%) and WDW (around 1%).
Comparison of methods using external data. Viewing our approach as a form of data augmentation, we compare it to previous methods for MRC in Table 6. Our model achieves better performance than back-translation. We also observe that language models such as ELMo obtain a higher performance gain than multi-task learning; however, combining the two leads to the most significant gain. This validates our assumption that multi-task learning is robust and complementary to previous methods such as language modeling.
Comparison of Different MTL Algorithms
In this section, we provide ablation studies as well as comparisons with existing MTL algorithms. We focus on MT-SAN without ELMo for training efficiency.
Table 7 compares different multi-task learning strategies for MRC. Both the mixture ratio (Sec "Answer Module for WDW") and sample re-weighting (Sec "Sample Re-Weighting") improve over the naive baseline of simply combining all the data (Algorithm "Multi-task Learning Algorithms"). On SQuAD+MARCO they provide around a 0.6% boost in both EM and F1, and around 1% on all three datasets; this accounts for roughly half of our overall improvement. Although sample re-weighting performs similarly to the mixture ratio, it significantly reduces training time because it eliminates the need for a grid search over the ratio. Kendall et al. (BIBREF18) use task uncertainty to weight tasks differently for MTL; our experiments show that this has some positive effect but does not perform as well as our two proposed techniques. We note that Kendall et al. (as well as other previous MTL methods) optimize the network to perform well on all tasks, whereas our method focuses on the target domain of interest, e.g., SQuAD.
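The mixture-ratio strategy is defined earlier in the paper; one plausible reading — treated here purely as an illustrative assumption — is that each epoch uses all target-task (SQuAD) samples plus a random $\alpha$-fraction of the external-task data:

```python
import random

def epoch_batches(target_data, external_data, alpha=0.4, batch_size=32):
    # All target samples plus a freshly drawn alpha-fraction of external samples,
    # shuffled together and cut into mini-batches for one epoch.
    mixed = list(target_data) + random.sample(list(external_data),
                                              int(alpha * len(external_data)))
    random.shuffle(mixed)
    for i in range(0, len(mixed), batch_size):
        yield mixed[i:i + batch_size]
```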
Sensitivity of mixture ratio. We also investigate the effect of the mixture ratio on model performance. Figure 1 plots the EM/F1 scores on the SQuAD dev set against the mixture ratio for MT-SAN trained on all three datasets. The curve peaks at $\alpha =0.4$; however, if we use $\alpha =0.2$ or $\alpha =0.5$, the performance drops by around $0.5\%$, falling well behind sample re-weighting. This shows that the performance of MT-SAN is sensitive to changes in $\alpha $, which makes the hyperparameter search harder and suggests a preference for our sample re-weighting technique. On the other hand, the ratio-based approach is straightforward to implement.
Analysis of sample weights. The dataset comparison in Table 1 and the performance in Table 2 suggest that NewsQA shares more similarity with SQuAD than MARCO does. An MTL system should therefore weight NewsQA samples more than MARCO samples to obtain higher performance. We verify this in Table 8 by showing examples and statistics of the sample weights. We present the CED$^{\prime }$ scores, as well as normalized versions of the question and answer scores (resp. $(H^{\prime }_{1,Q}-H^{\prime }_{k,Q})$ and $(H^{\prime }_{1,A}-H^{\prime }_{k,A})$ in ( 13 ), negated and normalized over all samples in NewsQA and MARCO in the same way as in ( 14 )). A high $H_Q$ score indicates high importance of the question, and $H_A$ of the answer; CED$^{\prime }$ summarizes the two. We first show one example from NewsQA and one from MARCO. The NewsQA question is a natural question (similar to SQuAD) with a short answer, leading to high question and answer scores. The MARCO question is a phrase with a very long answer, leading to lower scores. The overall statistics also show that NewsQA samples receive higher scores than MARCO samples. However, for MARCO questions that start with "when" or "who" (i.e., probably natural questions with short answers), the scores go up dramatically.
Conclusion
We proposed a multi-task learning framework for training MRC systems on datasets from different domains and developed two approaches to re-weight the samples for multi-task learning on MRC tasks. Empirical results demonstrate that our approaches outperform existing MTL methods as well as single-task baselines. Interesting future directions include combining our framework with larger language models such as BERT, and extending MTL to broader tasks such as language inference BIBREF40 and machine translation.
Acknowledgements
Yichong Xu has been partially supported by DARPA (FA8750-17-2-0130).
Answer Module for WDW
We describe the answer module for WDW here for completeness. For WDW we need to choose an answer from a list of candidates; the candidates are people's names that appear in the passage. We summarize the question in the same way as in the span-based models: $s_0\leftarrow \text{Highway}\left(\sum _{j} \frac{\exp (w_4 H^q_j)}{\sum _{j^{\prime }}\exp (w_4 H^q_{j^{\prime }})}\cdot H^q_j\right)$. We then compute an attention score via a simple dot product: $s=\text{softmax}(s_0^T M)$. The probability of a candidate being the true answer is the aggregation of attention scores over all appearances of the candidate: $\Pr (C|Q,P) \propto \sum _{1\le i\le n} s_i\,\mathbb {I}(p_i\in C)$
for each candidate $C$. Recall that $n$ is the length of passage $P$ and $p_i$ is its $i$-th word, so $\mathbb {I}(p_i\in C)$ indicates whether $p_i$ appears in candidate $C$. The candidate with the largest probability is chosen as the predicted answer.
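A NumPy sketch of these three equations is given below. The Highway layer is replaced by the identity for brevity, and the shapes (columns as time steps) and candidate handling (each candidate as a set of its token strings) are our own assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def wdw_predict(H_q, M, w4, passage_tokens, candidates, highway=lambda v: v):
    """H_q: (d, m) question memory, M: (d, n) passage memory, w4: (d,) weights."""
    # Self-attention summary of the question (s_0 above); Highway omitted here.
    s0 = highway(H_q @ softmax(w4 @ H_q))
    # Attention of the question summary over passage positions (s above).
    s = softmax(s0 @ M)
    # Attention-sum: add up the scores of every position whose token is in the candidate.
    scores = np.array([sum(s[i] for i, tok in enumerate(passage_tokens) if tok in cand)
                       for cand in candidates])
    probs = scores / scores.sum()
    return candidates[int(probs.argmax())], probs
```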
Experiment Results on DrQA
To demonstrate the flexibility of our approach, we also adapt DrQA BIBREF29 into our MTL framework. We only test DrQA using the basic Algorithm "Answer Module for WDW" , since our goal is mainly to test the MTL framework. | BIBREF7, BIBREF26 |
b1457feb6cdbf4fb19c8e87e1cd43981bc991c4c | b1457feb6cdbf4fb19c8e87e1cd43981bc991c4c_0 | Q: Do they compare computational time of AM-softmax versus Softmax?
Text: Introduction
Speaker Recognition is an essential task with applications in biometric authentication, identification, and security, among others BIBREF0. The field is divided into two main subtasks: Speaker Identification and Speaker Verification. In Speaker Identification, given an audio sample, the model tries to identify to which speaker in a predetermined list the locution belongs. In Speaker Verification, the model verifies whether a sampled audio belongs to a given speaker or not. Most techniques in the literature tackle this problem with INLINEFORM0 -vector methods BIBREF1, which extract features from the audio samples and classify them using methods such as PLDA BIBREF2, heavy-tailed PLDA BIBREF3, and Gaussian PLDA BIBREF4.
Despite the advances of recent years BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, Speaker Recognition is still a challenging problem. In the past years, Deep Neural Networks (DNNs) have taken over pattern recognition and signal processing tasks. Convolutional Neural Networks (CNNs) have already shown that they are currently the best choice for image classification, detection, and recognition tasks. Likewise, DNN models are being used in combination with traditional approaches or in end-to-end approaches for Speaker Recognition tasks BIBREF12, BIBREF13, BIBREF14. In hybrid approaches, it is common to use the DNN model to extract features from a raw audio sample and encode them into low-dimensional embedding vectors, in which samples sharing common features lie close to each other. The embedding vectors are then usually classified with traditional approaches.
The difficulty behind Speaker Recognition tasks is that audio signals are complex to model with low- and high-level features that are discriminative enough to distinguish different speakers. Methods that use handcrafted features extract more human-readable features and have a more appealing interpretation, because humans can see what the method is doing and which features are used to make the inference. Nevertheless, handcrafted features lack power: while we know what patterns they are looking for, we have no guarantee that these patterns are the best for the job. On the other hand, approaches based on Deep Learning can learn patterns that humans may not be able to interpret, but they usually obtain better results than traditional methods, despite the higher computational cost of training.
A promising approach to Speaker Recognition based on Deep Learning is the SincNet model BIBREF16, which unifies the power of Deep Learning with the interpretability of handcrafted features. SincNet uses a Deep Learning model to process raw audio samples and learn powerful features. To do so, it replaces the first convolutional layer of the DNN with parametrized sinc functions. The parametrized sinc functions implement band-pass filters and are convolved with the waveform audio signal to extract basic low-level features that are later processed by the deeper layers of the network. The sinc functions help the network learn more relevant features and also improve the convergence time of the model, as they have significantly fewer parameters than the first layer of a traditional DNN. At the top of the model, SincNet uses a Softmax layer, which maps the final features processed by the network into a multi-dimensional space corresponding to the different classes or speakers.
The Softmax function is usually used as the last layer of DNN models. It delimits a linear surface that can be used as a decision boundary to separate samples from different classes. Although the Softmax function works well for optimizing a decision boundary that separates the classes, it is not designed to minimize the distance between samples of the same class. This characteristic may hurt the model's effectiveness on tasks like Speaker Verification, which require measuring the distance between samples to make a decision. To deal with this problem, new approaches such as the Additive Margin Softmax BIBREF15 (AM-Softmax) have been proposed. The AM-Softmax introduces an additive margin to the decision boundary, which forces samples of the same class to be closer to each other, maximizing the distance between classes while minimizing the distance between samples of the same class.
In this paper, we propose a new method for Speaker Verification called Additive Margin SincNet (AM-SincNet), which is heavily inspired by the SincNet architecture and the AM-Softmax loss function. To validate our hypothesis, the proposed method is evaluated on the TIMIT BIBREF17 dataset using the Frame Error Rate. The remainder of the paper is organized as follows: Section SECREF2 presents related work, the proposed method is introduced in Section SECREF3, Section SECREF4 explains how we built our experiments, the results are discussed in Section SECREF5, and finally Section SECREF6 draws our conclusions.
Related Work
For some time, INLINEFORM0 -vectors BIBREF1 have been the state-of-the-art feature extraction method for speaker recognition tasks. The extracted features are usually classified using PLDA BIBREF2 or similar techniques, such as heavy-tailed PLDA BIBREF3 and Gauss-PLDA BIBREF4. The intuition behind these traditional methods and how they work can be better seen in BIBREF18. Although they have given us reasonable results, it is clear that there is still room for improvement BIBREF18.
Recently, neural networks and deep learning techniques have shown to be a particularly attractive choice for feature extraction and pattern recognition on a wide variety of data BIBREF19, BIBREF20. For instance, CNNs have proven to deliver high performance on image classification tasks. Moreover, deep learning architectures BIBREF21, BIBREF22 and hybrid systems BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27 achieve higher-quality results on audio signal processing than traditional approaches. As an example, BIBREF28 built a speaker verification framework based on the Inception-Resnet-v1 deep neural network architecture using the triplet loss function.
SincNet BIBREF16 is one of these innovative deep learning architectures for speaker recognition; it uses parametrized sinc functions as the foundation of its first convolutional layer. Sinc functions are designed to process digital signals such as audio, so using them in the first convolutional layer helps the network capture more meaningful features. Additionally, the extracted features are more human-readable than the ones obtained from ordinary convolutions.
Moreover, the sinc functions reduce the number of parameters in the SincNet first layer, because each sinc filter of any size has only two parameters to learn, against INLINEFORM0 for a conventional convolutional filter, where INLINEFORM1 is the size of the filter. As a result, the sinc functions enable the network to converge faster. Another advantage of the sinc functions is that they are symmetric, which means that we can reduce the computational effort of processing them by INLINEFORM2 by simply computing half of each filter and flipping it to the other side.
The first layer of SincNet consists of 80 filters of size 251, followed by two conventional convolutional layers with 60 filters of size five each. Normalization is applied to the input samples and to both the sinc and conventional convolutional layers. The result is then propagated to three fully connected layers of size 2048 and normalized again. The hidden layers use the Leaky ReLU BIBREF29 activation function. The sinc convolutional layer is initialized with mel-scale cutoff frequencies, whereas the conventional convolutional layers and the fully connected layers are initialized with the INLINEFORM0 scheme. Finally, a Softmax layer provides the set of posterior probabilities for the classification.
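The band-pass construction behind such a filter can be sketched as follows: each filter is the difference of two low-pass sinc filters, parameterized only by its two cutoff frequencies, and only half of the taps need to be computed thanks to symmetry. The Hamming window and the placeholder cutoff values are our own assumptions; in SincNet the cutoffs are learned by back-propagation rather than fixed as here.

```python
import numpy as np

def sinc_bandpass(f1, f2, length=251):
    """Band-pass FIR filter with cutoffs f1 < f2 (normalized to the sampling rate)."""
    half = np.arange(1, (length - 1) // 2 + 1)
    # Right half of the (symmetric) impulse response: difference of two low-pass sincs.
    right = 2 * f2 * np.sinc(2 * f2 * half) - 2 * f1 * np.sinc(2 * f1 * half)
    center = np.array([2 * (f2 - f1)])        # value at n = 0
    h = np.concatenate([right[::-1], center, right])
    return h * np.hamming(length)             # smooth the truncation (assumed window)

# One of the 80 first-layer filters (placeholder cutoffs), applied to a toy waveform.
filt = sinc_bandpass(0.02, 0.05)
feature = np.convolve(np.random.randn(16000), filt, mode="valid")
```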
Additive Margin SincNet
The AM-SincNet is built by replacing the softmax layer of the SincNet with the Additive Margin Softmax BIBREF15 . The Additive Margin Softmax (AM-Softmax) is a loss function derived from the original Softmax which introduces an additive margin to its decision boundary.
The additive margin works as a better class separator than the traditional decision boundary of Softmax. Furthermore, it forces samples from the same class to move closer to each other, thus improving results for tasks such as classification and verification. The AM-Softmax equation is written as: DISPLAYFORM0 DISPLAYFORM1
In the above equation, W is the weight matrix and INLINEFORM0 is the input of the last fully connected layer for the INLINEFORM1 -th sample. INLINEFORM2 is also known as the target logit for the INLINEFORM3 -th sample. INLINEFORM4 and INLINEFORM5 are the parameters responsible for scaling and for the additive margin, respectively. Although the network can learn INLINEFORM6 during optimization, this can make convergence very slow, so a sensible choice is to follow BIBREF15 and set INLINEFORM7 to a fixed value. The INLINEFORM8 parameter, on the other hand, is fundamental and has to be chosen carefully. In our context, we assume that both INLINEFORM9 and INLINEFORM10 are normalized to one. Figure FIGREF1 shows a comparison between the traditional Softmax and the AM-Softmax.
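A NumPy sketch of the loss described above is given below. The scale of 30 follows the experimental setup later in the paper; the margin value and the array shapes are placeholders, and the small constant mirrors the term added to avoid division by zero.

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.4, eps=1e-11):
    """features: (N, d) embeddings, weights: (d, C) class weights, labels: (N,) ints."""
    # L2-normalize both factors so the logits become cosine similarities.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    w = weights / (np.linalg.norm(weights, axis=0, keepdims=True) + eps)
    cos = f @ w                                            # (N, C)
    rows = np.arange(len(labels))
    logits = s * cos
    # Subtract the additive margin from the target logit only.
    logits[rows, labels] = s * (cos[rows, labels] - m)
    # Numerically stable cross-entropy on the margin-adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```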
The SincNet approach has shown strong results on the speaker recognition task. Its architecture has been compared against ordinary CNNs and several other well-known methods for speaker recognition and verification, such as MFCC and FBANK, and in every scenario SincNet has outperformed the alternative approaches. SincNet's most significant contribution is the use of sinc functions in its first convolutional layer. Nevertheless, to calculate the posterior probabilities over the target speakers, SincNet applies the Softmax loss function which, despite being a reasonable choice, is not particularly capable of producing a sharp distinction among the classes in the final layer. We therefore replace the last layer of SincNet from Softmax to AM-Softmax. Figure FIGREF4, a minor modification of the original SincNet figure found in BIBREF16, shows the architecture of the proposed AM-SincNet.
Experiments
The proposed AM-SincNet method has been evaluated on the well-known TIMIT dataset BIBREF17, which contains audio samples from 630 speakers of the eight main American English dialects, with each speaker reading a few phonetically rich sentences. We used the same pre-processing procedures as BIBREF16; for example, the non-speech intervals at the beginning and end of each sentence were removed. Following the same protocol as BIBREF16, we used five utterances of each speaker for training the network and the remaining three for evaluation. Moreover, we split the waveform of each audio sample into 200 ms chunks with 10 ms overlap, and these chunks were used to feed the network.
For training, we configured the network to use RMSprop as the optimizer with mini-batches of size 128, a learning rate of INLINEFORM0, INLINEFORM1, and INLINEFORM2. The AM-Softmax has two more parameters than the traditional Softmax: the scaling factor INLINEFORM3 and the margin size INLINEFORM4. As mentioned before, we set the scaling factor INLINEFORM5 to a fixed value of 30 in order to speed up training. For the margin parameter INLINEFORM6, we carefully conducted several experiments to evaluate its influence on the Frame Error Rate (FER).
We also added an INLINEFORM0 constant of value INLINEFORM1 to the AM-Softmax equation to avoid division by zero where required. In each experiment, we trained the models for exactly 352 epochs, which appeared sufficient to properly expose the different training speeds of the two competing models. The experiments were run on an NVIDIA Titan XP GPU, and the training process lasted about four days. The experiments reported in this paper can be reproduced using the code we made available online on GitHub.
Results
Several experiments were performed to evaluate the proposed method against the traditional SincNet approach, and in every one of them the proposed AM-SincNet produced more accurate results. The proposed AM-SincNet requires two extra parameters: the scaling parameter INLINEFORM0 and the margin parameter INLINEFORM1. We decided to use INLINEFORM2 and conducted experiments to evaluate the influence of the margin parameter INLINEFORM3 on the Frame Error Rate.
Table TABREF6 shows the Frame Error Rate (FER), in percentage, for the original SincNet and our proposed method on the test data over 352 epochs. To verify the influence of the margin parameter on the proposed method, we performed several experiments using different values of INLINEFORM0 in the range INLINEFORM1. The table shows the results for the first 96 and the last 32 epochs in steps of 16, with the best result of each epoch highlighted in bold.
Traditional SincNet only achieves better results than the proposed AM-SincNet in the first epochs, when neither model has had enough training time yet. After that, at epoch 48, the original SincNet starts to converge with an FER around INLINEFORM0, while the proposed method keeps decreasing its error throughout training.
By epoch 96, the proposed method already has an FER more than INLINEFORM0 better than the original SincNet for almost every value of INLINEFORM1, excluding INLINEFORM2. The difference keeps increasing over the epochs, and at epoch 352 the proposed method reaches an FER of INLINEFORM3 ( INLINEFORM4 ) against INLINEFORM5 for SincNet, which means that at this epoch AM-SincNet has a Frame Error Rate approximately INLINEFORM6 better than traditional SincNet. Figure FIGREF7 plots the Frame Error Rate on the test data for both methods along the training epochs; for AM-SincNet, we used the margin parameter INLINEFORM7.
From Table TABREF6, we can also see the impact of the margin parameter INLINEFORM0 on our proposed method. The FER calculated for INLINEFORM1 reaches its lowest (best) value at epochs 32 and 320. Similarly, INLINEFORM2 and INLINEFORM3 reach their lowest values at epochs 16 and 336, respectively. The value INLINEFORM4 scores the lowest result for epochs 64 and 96, INLINEFORM5 has the lowest score at epoch 80, and INLINEFORM6 reaches the lowest value at epochs 48 and 352.
The values INLINEFORM0, INLINEFORM1, INLINEFORM2, and INLINEFORM3 do not reach the lowest value for any epoch in this table. Although the results in Table TABREF6 may suggest that there is a golden value of INLINEFORM4 that yields the best Frame Error Rate, the differences in FER among these settings may not be significant. Indeed, at the end of training, all of the AM-SincNet experiments approach an FER around INLINEFORM5. In any case, AM-SincNet outperforms the baseline approach.
Conclusion
This paper proposed a new approach for directly processing waveform audio that is heavily inspired by the SincNet neural network architecture and the Additive Margin Softmax loss function. The proposed method, AM-SincNet, shows a Frame Error Rate about 40% lower than the traditional SincNet. This shows that the loss function used in a model can have a significant impact on the result.
From Figure FIGREF7, it is possible to notice that the FER ( INLINEFORM0 ) of the proposed method may not have converged yet in the last epochs. Thus, had the training lasted longer, we might have observed an even larger difference between the two methods. The proposed method has two more parameters to set compared with the traditional SincNet, although our experiments show that these extra parameters can be fixed without compromising the model's performance.
For future work, we would like to test our method on different datasets such as VoxCeleb2 BIBREF21, which has over a million samples from over 6k speakers. With more data, the model may show an even more significant improvement. We also intend to use additional metrics, such as the Classification Error Rate ( INLINEFORM0 ) (CER) and the Equal Error Rate ( INLINEFORM1 ) (EER), to compare the models.
Acknowledgment
This work was supported in part by CNPq and CETENE (Brazilian research agencies). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan XP GPU used for this research. | No |
46bca122a87269b20e252838407a2f88f644ded8 | 46bca122a87269b20e252838407a2f88f644ded8_0 | Q: Do they visualize the difference between AM-Softmax and regular softmax? | Yes
7c792cda220916df40edb3107e405c86455822ed | 7c792cda220916df40edb3107e405c86455822ed_0 | Q: what metrics were used for evaluation?
Text: Introduction
With the development of digital media technology and the popularity of the mobile Internet, online visual content has increased rapidly in recent years. Consequently, visual content analysis for retrieval BIBREF0 , BIBREF1 and understanding has become a fundamental problem in multimedia research, which has motivated researchers worldwide to develop advanced techniques. Most previous works, however, have focused on classification tasks, such as annotating an image BIBREF2 , BIBREF3 or video BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 with a given fixed label set. With pioneering methods BIBREF8 , BIBREF9 proposed to tackle the challenge of describing images with natural language, visual content understanding has attracted more and more attention. State-of-the-art techniques for image captioning have been surpassed in succession by new, more advanced approaches BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent research BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 has focused on describing videos with more comprehensive sentences instead of simple keywords. Different from images, video is sequential data with temporal structure, which poses a significant challenge for video captioning. Most existing works in video description employ max or mean pooling across video frames to obtain a video-level representation, which fails to capture temporal knowledge. To address this problem, Yao et al. proposed to use 3-D Convolutional Neural Networks to explore local temporal information in video clips, where the most relevant temporal fragments are automatically chosen for generating the natural language description with an attention mechanism BIBREF17 . In BIBREF19 , Venugopalan et al. implemented a Long Short-Term Memory (LSTM) network, a variant of Recurrent Neural Networks (RNNs), to model the global temporal structure of the whole video snippet. However, these methods fail to exploit bidirectional global temporal structure, which could benefit not only from previous video frames but also from information in future frames. Moreover, existing video captioning schemes cannot adaptively learn dense video representations and generate sparse semantic sentences.
In this work, we propose to construct a novel bidirectional LSTM (BiLSTM) network for video captioning. More specifically, we design a joint visual model that comprehensively explores bidirectional global temporal information in video data by integrating a forward LSTM pass, a backward LSTM pass, and CNN features. To enhance the subsequent sentence generation, the obtained visual representation is fed into the LSTM-based language model as initialization. We summarize the main contributions of this work as follows: (1) to our best knowledge, our approach is one of the first to utilize bidirectional recurrent neural networks for exploring bidirectional global temporal structure in video captioning; (2) we construct two sequential processing models for adaptive video representation learning and language description generation, respectively, rather than using the same LSTM for both video frame encoding and text decoding as in BIBREF19 ; and (3) extensive experiments on a real-world video corpus illustrate the superiority of our proposal compared to state-of-the-art methods.
The Proposed Approach
In this section, we elaborate on the proposed video captioning framework, including an overview of the overall flowchart (as illustrated in Figure FIGREF1 ), a brief review of the LSTM-based sequential model, the joint visual modelling with bidirectional LSTM and CNNs, and the sentence generation process.
LSTM-based Sequential Model
With their success in speech recognition and machine translation tasks, recurrent neural structures, especially LSTM and its variants, have come to dominate the sequence processing field. LSTM has been demonstrated to effectively address the vanishing or exploding gradient problem BIBREF20 during back-propagation through time (BPTT) BIBREF21 and to exploit temporal dependencies in very long temporal structures. LSTM incorporates several control gates and a constant memory cell, the details of which are as follows: DISPLAYFORM0 DISPLAYFORM1
where the INLINEFORM0 -like matrices are LSTM weight parameters, INLINEFORM1 and INLINEFORM2 denote the sigmoid and hyperbolic tangent non-linear functions, respectively, and INLINEFORM3 indicates element-wise multiplication. Inspired by the success of LSTM, we devise an LSTM-based network to investigate the temporal structure of video for video representation, and then initialize the language model with the video representation to generate the video description.
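The gate equations referenced above do not survive the extraction; for reference, the standard (peephole-free) LSTM formulation that this description of weight matrices, sigmoid, tanh and element-wise products matches is reproduced below — the paper's exact variant may differ in details.

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), &
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), &
g_t &= \tanh(W_g x_t + U_g h_{t-1} + b_g), \\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t, &
h_t &= o_t \odot \tanh(c_t).
\end{aligned}
```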
Bidirectional Video Modelling
Different from other video description approaches that represent video by pooling across frames BIBREF16 or by 3-D CNNs with local temporal structure BIBREF15 , we apply BiLSTM networks to exploit the bidirectional temporal structure of video clips. Convolutional Neural Networks (CNNs) have demonstrated overwhelming performance on image recognition, classification BIBREF2 and video content analysis BIBREF11 , BIBREF19 . Therefore, we extract the Caffe BIBREF22 INLINEFORM0 layer of each frame through the VGG-16 BIBREF23 caffemodel. Following BIBREF19 , BIBREF16 , we sample one frame from every ten frames in the video and extract the INLINEFORM1 layer, the second fully-connected layer, to represent the selected frames. A INLINEFORM2 -by-4096 feature matrix is then generated to denote the given video clip, where INLINEFORM3 is the number of frames sampled from the video. As in Figure FIGREF1 , we then implement two LSTMs, a forward pass and a backward pass, to encode the CNN features of the video frames, and merge the output sequences at each time step with a learnt weight matrix. Interestingly, at each time step of the bidirectional structure we not only "see" the past frames but also "peek" at the future frames. In other words, our bidirectional LSTM structure encodes the video by scanning the entire video sequence several times (as many as the number of time steps at the encoding stage), and each scan is related to its adjacent scans. To investigate the effect of reinforcing the original CNN features, we combine the merged hidden states of the BiLSTM structure with the INLINEFORM4 representation step-wise. We further employ another forward LSTM network over the combined sequence to generate our video representation. In BIBREF24 , BIBREF25 , Wu et al. demonstrated that using the output of the last step can perform better than pooling across the outputs of all time steps in video classification tasks. Similarly, we represent the entire video clip using the memory cell state and output at the last time step, and feed them into the description generator as the initialization of its memory cell and hidden unit, respectively.
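A compact PyTorch sketch of this joint visual model is shown below. Using a single bidirectional `nn.LSTM` in place of two separate passes, the `tanh` on the merge layer, and the layer sizes are simplifying assumptions; the text itself specifies the 4096-d fc7 inputs and initializing the decoder from the last step's state.

```python
import torch
import torch.nn as nn

class BiLSTMVideoEncoder(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512):
        super().__init__()
        # Forward + backward pass over the frame features.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.merge = nn.Linear(2 * hidden, hidden)   # learnt merge of the two directions
        # Second-stage LSTM over the reinforced (fc7 + merged BiLSTM) sequence.
        self.top = nn.LSTM(feat_dim + hidden, hidden, batch_first=True)

    def forward(self, frames):                        # frames: (B, T, 4096)
        states, _ = self.bilstm(frames)               # (B, T, 2 * hidden)
        merged = torch.tanh(self.merge(states))       # (B, T, hidden)
        combined = torch.cat([frames, merged], dim=-1)
        _, (h, c) = self.top(combined)
        return h[-1], c[-1]                           # used to initialize the decoder LSTM

# h0, c0 = BiLSTMVideoEncoder()(torch.randn(2, 28, 4096))
```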
Generating Video Description
Existing video captioning approaches usually share a common component between the visual model and the language model as the representation BIBREF19 , BIBREF15 , which may lead to severe information loss. Besides, they also input the same pooled visual vector of the whole video into every sentence processing unit, thereby ignoring temporal structure. Such methods may easily result in undesirable outputs due to the duplicated inputs at every time step of the new sequence BIBREF16 . To address these issues, we generate descriptions for video clips using a sequential model initialized with the visual representation. Inspired by the superior performance of probabilistic sequence generation machines, we generate each word recurrently at each time step. The log probability of sentence INLINEFORM0 can then be expressed as below: DISPLAYFORM0
where INLINEFORM0 denotes all parameters of the sentence generation model, INLINEFORM1 is the representation of the given video, and INLINEFORM2 indicates the number of words in the sentence. We identify the most likely sentence by maximizing the log likelihood in Equation ( EQREF10 ); our objective function can then be described as: DISPLAYFORM0
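The two elided equations above commonly take the following chain-rule form; this is a reconstruction of the standard formulation, and the paper's exact notation may differ.

```latex
\log p(S \mid V; \theta) = \sum_{t=1}^{N} \log p(w_t \mid w_1, \dots, w_{t-1}, V; \theta),
\qquad
\theta^{*} = \operatorname*{arg\,max}_{\theta} \sum_{(V, S)} \log p(S \mid V; \theta).
```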
The optimizer updates INLINEFORM0 with INLINEFORM1 across the entire training process by applying Stochastic Gradient Descent (SGD). During the training phase, the loss is back-propagated through time, and each LSTM unit learns to derive an appropriate hidden representation INLINEFORM2 from the input sequence. We then apply the Softmax function to obtain the probability distribution over the words in the entire vocabulary.
At the beginning of sentence generation, as depicted in Figure FIGREF1 , an explicit starting token (<BOS>) is needed, and we terminate each sentence when the end-of-sentence token (<EOS>) is fed in. During the test phase, similar to BIBREF19 , our language model repeatedly takes the word INLINEFORM0 with maximum likelihood as the input at time INLINEFORM1 until the <EOS> token is emitted.
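The test-time procedure reduces to a greedy loop such as the one below; `decoder_step` stands for one step of the language LSTM plus Softmax and is a placeholder, while the 40-word cap matches the length limit discussed in the experiments.

```python
def greedy_decode(decoder_step, state, bos_id, eos_id, max_len=40):
    """decoder_step(token_id, state) -> (vocab probabilities, new state)."""
    token, sentence = bos_id, []
    for _ in range(max_len):
        probs, state = decoder_step(token, state)   # state holds the LSTM (c, h) pair
        token = int(probs.argmax())                  # word with maximum likelihood
        if token == eos_id:                          # stop once <EOS> is emitted
            break
        sentence.append(token)
    return sentence
```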
Dataset
Video Dataset: We evaluate our approach by conducting experiments on the Microsoft Research Video Description (MSVD) BIBREF26 corpus, which contains descriptions for a collection of 1,970 video clips. Each video clip depicts a single action or a simple event, such as “shooting”, “cutting”, “playing the piano” or “cooking”, with a duration between 8 and 25 seconds. There are roughly 43 sentences per video and 7 words per sentence on average. Following the majority of prior works BIBREF15 , BIBREF16 , BIBREF19 , BIBREF17 , we split the entire dataset into training, validation and test sets with 1,200, 100 and 670 snippets, respectively.
Image Dataset: Compared to the data available for other LSTM structures and deep networks, the video dataset for the captioning task is small; we therefore apply transfer learning from image description. The COCO 2014 image description dataset BIBREF27 has frequently been used for experiments BIBREF12 , BIBREF11 , BIBREF10 , BIBREF14 ; it consists of more than 120,000 images, about 82,000 for training and 40,000 for testing. We first pre-train our language model on the COCO 2014 training set, and then perform transfer learning on MSVD with the integral video description model.
Experimental Setup
Description Processing: Minimal preprocessing has been applied to the descriptions in both the MSVD and COCO 2014 datasets. We first employ the word_tokenize operation of the NLTK toolbox to obtain individual words and convert them to lower case. All punctuation is removed, and each sentence is made to start with <BOS> and end with <EOS>. Finally, we combine the sets of words in MSVD and COCO 2014 and generate a vocabulary of 12,984 unique words. Each word input to our system is represented by a one-hot vector.
Video Preprocessing: As in previous video description works BIBREF16 , BIBREF19 , BIBREF15 , we sample one video frame out of every ten, so the selected frames represent the given video with 28.5 frames per video on average. We extract frame-wise Caffe INLINEFORM0 layer features using the VGG-16 model and then feed the sequential features into our video captioning system.
We employ a bidirectional S2VT BIBREF19 and a joint bidirectional LSTM structure to investigate the performance of our bidirectional approach. For convenient comparison, we set the hidden unit size of all LSTMs in our system to 512, as in BIBREF15 , BIBREF19 , except for the first video encoder in the unidirectional joint LSTM. During the training phase, we set the maximum number of LSTM time steps to 80 in all our models and use mini-batches of 16 video-sentence pairs. We note that over 99% of the descriptions in MSVD and COCO 2014 contain no more than 40 words, and in BIBREF19 , Venugopalan et al. pointed out that 94% of the YouTube training videos satisfy our maximum length limit. To ensure sufficient visual content, we adopt two ways of adaptively truncating the videos and sentences when the total number of frames and words exceeds the limit: if the number of words is within 40, we truncate the frames arbitrarily to satisfy the maximum length; when the sentence is longer than 40 words, we discard the words beyond this length and keep at most 40 video frames.
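The adaptive truncation rule can be written directly from the description above; taking a prefix of the frames is a simplification of the arbitrary truncation mentioned in the text.

```python
def truncate_pair(frames, words, max_steps=80, max_words=40):
    # Keep the sentence intact when it fits; otherwise cap both modalities at 40.
    if len(words) <= max_words:
        frames = frames[:max_steps - len(words)]
    else:
        words = words[:max_words]
        frames = frames[:max_words]
    return frames, words
```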
Bidirectional S2VT: Similar to BIBREF19 , we implement several S2VT-based models: S2VT, bidirectional S2VT, and reinforced S2VT with a bidirectional LSTM video encoder. We conduct experiments on S2VT using our video features and LSTM structure instead of the end-to-end model in BIBREF19 , which needs raw RGB frames as input. For the bidirectional S2VT model, we first pre-train the description generator on COCO 2014 for image captioning. We then run forward and backward passes for video encoding and merge the hidden states step-wise with a learnt weight, while the language layer receives the merged hidden representation padded with nulls in place of words. We also pad the inputs of the forward and backward LSTMs with zeros at the decoding stage and concatenate the merged hidden states to the embedded words. In the last model, we regard the merged bidirectional hidden states as a complementary enhancement and concatenate them to the original INLINEFORM0 features to obtain a reinforced video representation, from which the sentence is derived using the last LSTM. In all S2VT-based models, the loss is computed only at the decoding stage.
Joint-BiLSTM: Different from the S2VT-based models, we employ joint bidirectional LSTM networks that encode the video sequence and decode the description with separate LSTMs rather than sharing a common one. We stack two layers of LSTM networks to encode the video and pre-train the language model as in the S2VT-based models. Similarly, unidirectional LSTM, bidirectional LSTM and reinforced BiLSTM encoders are evaluated to investigate the performance of each structure. We give the first LSTM of the unidirectional encoder 1024 hidden units so that its output can be passed directly to the second encoder, and the memory cell and hidden state at the last time step are used to initialize the description decoder. The bidirectional and reinforced BiLSTM encoders are implemented similarly to the corresponding S2VT-based structures, and the resulting video representation is fed into the description generator as in the unidirectional model described above.
Results and Analysis
BLEU BIBREF28 , METEOR BIBREF29 , ROUGE-L BIBREF30 and CIDEr BIBREF31 are common evaluation metrics for image and video description. The first three were originally proposed to evaluate machine translation, while CIDEr was proposed to evaluate image description with sufficient reference sentences. To quantitatively evaluate the performance of our bidirectional recurrent approach, we adopt the METEOR metric because of its robust performance. In contrast to the other three metrics, METEOR can capture semantic aspects, since it identifies all possible matches with exact, stem, paraphrase and synonym matchers over the WordNet database and computes sentence-level similarity scores according to matcher weights. The authors of CIDEr also argued that METEOR outperforms CIDEr when the reference set is small BIBREF31 .
We first compare our unidirectional and bidirectional structures and the reinforced BiLSTM. As shown in Table TABREF19 , in the S2VT-based model the bidirectional structure scores slightly lower than the unidirectional structure, while the opposite holds in the joint LSTM case. This may be caused by the padding at the description generation stage in the S2VT-based structure. We note that the BiLSTM reinforced structure gains more than 3% improvement over the unidirectional-only model in both the S2VT-based and joint LSTM structures, which means that incorporating bidirectional encoding of the video representation helps exploit additional temporal structure in the video encoder (Figure FIGREF17 ). At the structural level, Table TABREF19 shows that our Joint-LSTM-based models outperform the corresponding S2VT-based models, demonstrating that our Joint-LSTM structure benefits from encoding video and decoding natural language separately.
We also evaluate our Joint-BiLSTM structure by comparing it with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our reinforced Joint-BiLSTM model outperforms all of the baseline methods. The “LSTM” result in the first row is taken from BIBREF15 , and the second-to-last row denotes the best model in BIBREF17 , which combines local temporal structure using C3D with global temporal structure via temporal attention. From the first two rows, our unidirectional joint LSTM shows a clear improvement, and compared with the S2VT-VGG model in line 3 it also demonstrates some superiority. Even though LSTM-E jointly models video and description representations by minimizing the distance between a video and its corresponding sentence, our reinforced Joint-BiLSTM obtains better performance thanks to bidirectional encoding and separate visual and language models.
We observed that, although our unidirectional S2VT has the same deployment as BIBREF19 , our model gives slightly poorer performance (line 1, Table TABREF19 and line 3, Table TABREF20 ). As mentioned in Section 3.2.2, they employed an end-to-end model that reads original RGB frames and fine-tunes the VGG caffemodel, so their frame features from the VGG INLINEFORM0 layer are more compatible with the MSVD dataset and the description task. However, our joint LSTM demonstrates better performance with general features rather than data-specific ones, and is even superior to their model with multiple feature modalities (RGB + Flow, line 4, Table TABREF20 ), which suggests that our Joint-BiLSTM could show even stronger descriptive ability in an end-to-end setting. We will investigate an end-to-end variant of our Joint-BiLSTM in future work.
Conclusion and Future Works
In this paper, we introduced a sequence-to-sequence approach to describing video clips with natural language. The core of our method is the use of two separate LSTM networks, one for the visual encoder and one for the natural language generator. In particular, we encoded video sequences with a bidirectional Long-Short Term Memory (BiLSTM) network, which can effectively capture the bidirectional global temporal structure in video. Experimental results on the MSVD dataset demonstrated superior performance over many other state-of-the-art methods.
We also note some limitations of our model compared with the end-to-end framework employed in BIBREF19 and the distance measure used in BIBREF15 . In the future we will make more effort to address these limitations and to exploit linguistic domain knowledge in visual content understanding. | METEOR
b3fcab006a9e51a0178a1f64d1d084a895bd8d5c | b3fcab006a9e51a0178a1f64d1d084a895bd8d5c_0 | Q: what are the state of the art methods?
Text: Introduction
With the development of digital media technology and popularity of Mobile Internet, online visual content has increased rapidly in recent couple of years. Subsequently, visual content analysis for retrieving BIBREF0 , BIBREF1 and understanding becomes a fundamental problem in the area of multimedia research, which has motivated world-wide researchers to develop advanced techniques. Most previous works, however, have focused on classification task, such as annotating an image BIBREF2 , BIBREF3 or video BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 with given fixed label sets. With some pioneering methods BIBREF8 , BIBREF9 tackling the challenge of describing images with natural language proposed, visual content understanding has attracted more and more attention. State-of-the-art techniques for image captioning have been surpassed by new advanced approaches in succession BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent researches BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 have been focusing on describing videos with more comprehensive sentences instead of simple keywords. Different from image, video is sequential data with temporal structure, which may pose significant challenge to video caption. Most of the existing works in video description employed max or mean pooling across video frames to obtain video-level representation, which failed to capture temporal knowledge. To address this problem, Yao et al. proposed to use 3-D Convolutional Neural Networks to explore local temporal information in video clips, where the most relevant temporal fragments were automatically chosen for generating natural language description with attention mechanism BIBREF17 . In BIBREF19 , Venugopanlan et al. implemented a Long-Short Term Memory (LSTM) network, a variant of Recurrent Neural Networks (RNNs), to model the global temporal structure in whole video snippet. However, these methods failed to exploit bidirectional global temporal structure, which could benefit from not only previous video frames, but also information in future frames. Also, existing video captioning schemes cannot adaptively learn dense video representation and generate sparse semantic sentences.
In this work, we propose to construct a novel bidirectional LSTM (BiLSTM) network for video captioning. More specifically, we design a joint visual modelling to comprehensively explore bidirectional global temporal information in video data by integrating a forward LSTM pass, a backward LSTM pass, together with CNNs features. In order to enhance the subsequent sentence generation, the obtained visual representations are then fed into LSTM-based language model as initialization. We summarize the main contributions of this work as follows: (1) To our best knowledge, our approach is one of the first to utilize bidirectional recurrent neural networks for exploring bidirectional global temporal structure in video captioning; (2) We construct two sequential processing models for adaptive video representation learning and language description generation, respectively, rather than using the same LSTM for both video frames encoding and text decoding in BIBREF19 ; and (3) Extensive experiments on a real-world video corpus illustrate the superiority of our proposal as compared to state-of-the-arts.
The Proposed Approach
In this section, we elaborate the proposed video captioning framework, including an introduction of the overall flowchart (as illustrated in Figure FIGREF1 ), a brief review of LSTM-based Sequential Model, the joint visual modelling with bidirectional LSTM and CNNs, as well as the sentence generation process.
LSTM-based Sequential Model
Following their success in speech recognition and machine translation tasks, recurrent neural structures, especially LSTM and its variants, have dominated the sequence processing field. LSTM has been demonstrated to effectively address the vanishing or exploding gradient problem BIBREF20 during back-propagation through time (BPTT) BIBREF21 and to exploit temporal dependencies in very long temporal structures. LSTM incorporates several control gates and a constant memory cell, the details of which are as follows: DISPLAYFORM0 DISPLAYFORM1
where the INLINEFORM0 -like matrices are LSTM weight parameters, INLINEFORM1 and INLINEFORM2 denote the sigmoid and hyperbolic tangent non-linear functions, respectively, and INLINEFORM3 indicates the element-wise multiplication operation. Inspired by the success of LSTM, we devise an LSTM-based network to investigate the temporal structure of video for video representation, and then initialize the language model with the video representation to generate the video description.
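For reference, a standard formulation of the gate equations referred to above (a hedged reconstruction in LaTeX, since the display equations appear only as DISPLAYFORM placeholders here) is:

    i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)
    f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)
    o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)
    g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)
    c_t = f_t \odot c_{t-1} + i_t \odot g_t
    h_t = o_t \odot \tanh(c_t)

Here the W-like matrices and bias vectors are the weight parameters, \sigma and \tanh the non-linearities, and \odot element-wise multiplication, matching the notation explained above.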
Bidirectional Video Modelling
Different from other video description approaches that represent video by implementing pooling across frames BIBREF16 or 3-D CNNs with local temporal structure BIBREF15 , we apply BiLSTM networks to exploit the bidirectional temporal structure of video clips. Convolutional Neural Networks (CNNs) has demonstrated overwhelming performance on image recognition, classification BIBREF2 and video content analysis BIBREF11 , BIBREF19 . Therefore, we extract caffe BIBREF22 INLINEFORM0 layer of each frame through VGG-16 layers BIBREF23 caffemodel. Following BIBREF19 , BIBREF16 , we sample one frame from every ten frames in the video and extract the INLINEFORM1 layer, the second fully-connected layer, to express selected frames. Then a INLINEFORM2 -by-4096 feature matrix generated to denote given video clip, where INLINEFORM3 is the number of frames we sampled in the video. As in Figure FIGREF1 , we then implement two LSTMs, forward pass and backward pass, to encode CNNs features of video frames, and then merge the output sequences at each time point with a learnt weight matrix. What is interesting is that at each time point in bidirectional structure, we not only “see” the past frames, but also “peek” at the future frames. In other words, our bidirectional LSTM structure encodes video by scanning the entire video sequence several times (same as the number of time steps at encoding stage), and each scan is relevant to its adjacent scans. To investigate the effect of reinforcement of original CNNs feature, we combine the merged hidden states of BiLSTM structure and INLINEFORM4 representation time step-wise. We further employ another forward pass LSTM network with incorporated sequence to generate our video representation. In BIBREF24 , BIBREF25 , Wu et al. had demonstrated that using the output of the last step could perform better than pooling approach across outputs of all the time steps in video classification task. Similarly, we represent the entire video clip using the state of memory cell and output of the last time point, and feed them into description generator as initialization of memory cell and hidden unit respectively.
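A minimal PyTorch sketch of this encoder is given below; it assumes pre-extracted 4096-dimensional VGG fc7 frame features and 512 hidden units, uses a single bidirectional nn.LSTM in place of two separate passes, and all remaining wiring details are illustrative assumptions rather than the authors' exact implementation:

    import torch
    import torch.nn as nn

    class BiLSTMVideoEncoder(nn.Module):
        def __init__(self, feat_dim=4096, hidden=512):
            super().__init__()
            self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                                  bidirectional=True)
            self.merge = nn.Linear(2 * hidden, hidden)   # learnt merge of fwd/bwd states
            self.fuse = nn.LSTM(feat_dim + hidden, hidden, batch_first=True)

        def forward(self, frames):                       # frames: (batch, T, 4096)
            states, _ = self.bilstm(frames)              # (batch, T, 2*hidden)
            merged = torch.tanh(self.merge(states))      # step-wise merged representation
            reinforced = torch.cat([frames, merged], dim=-1)  # concat with fc7 features
            _, (h_n, c_n) = self.fuse(reinforced)
            return h_n[-1], c_n[-1]                      # used to initialise the decoder

    encoder = BiLSTMVideoEncoder()
    h0, c0 = encoder(torch.randn(2, 28, 4096))           # e.g. 28 sampled frames per clip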
Generating Video Description
Existing video captioning approaches usually share a common part of the visual model and the language model as the representation BIBREF19 , BIBREF15 , which may lead to severe information loss. Besides, they also input the same pooled visual vector of the whole video into every sentence processing unit, thereby ignoring temporal structure. Such methods may easily produce undesirable outputs due to the duplicated inputs at every time point of the new sequence BIBREF16 . To address these issues, we generate descriptions for video clips using a sequential model initialized with the visual representation. Inspired by the superior performance of probabilistic sequence generation machines, we generate each word recurrently at each time point. The log probability of a sentence INLINEFORM0 can then be expressed as below: DISPLAYFORM0
where INLINEFORM0 denotes all parameters of the sentence generation model, INLINEFORM1 is the representation of the given video, and INLINEFORM2 indicates the number of words in the sentence. We identify the most likely sentence by maximizing the log likelihood in Equation ( EQREF10 ); our objective function can then be described as: DISPLAYFORM0
The optimizer updates INLINEFORM0 with INLINEFORM1 across the entire training process using Stochastic Gradient Descent (SGD). During the training phase, the loss is back-propagated through time and each LSTM unit learns to derive an appropriate hidden representation INLINEFORM2 from the input sequence. We then apply the Softmax function to obtain the probability distribution over the words in the entire vocabulary.
At the beginning of sentence generation, as depicted in Figure FIGREF1 , an explicit starting token (<BOS>) is needed, and we terminate each sentence when the end-of-sentence token (<EOS>) is fed in. During the test phase, similar to BIBREF19 , our language model repeatedly takes the maximum-likelihood word INLINEFORM0 as input at time INLINEFORM1 until the <EOS> token is emitted.
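The test-time procedure can be sketched as a simple greedy loop; the wiring below is illustrative (the token ids and decoder layout are assumptions), with the vocabulary size taken from the experimental setup described later:

    import torch
    import torch.nn as nn

    BOS, EOS, VOCAB, HIDDEN = 1, 2, 12984, 512           # token ids are assumptions

    embed = nn.Embedding(VOCAB, HIDDEN)
    step = nn.LSTMCell(HIDDEN, HIDDEN)
    to_vocab = nn.Linear(HIDDEN, VOCAB)

    def greedy_decode(h, c, max_len=40):
        """Feed <BOS>, then re-feed the argmax word until <EOS> (or max_len)."""
        words, token = [], torch.tensor([BOS])
        for _ in range(max_len):
            h, c = step(embed(token), (h, c))
            token = to_vocab(h).argmax(dim=-1)           # most likely next word
            if token.item() == EOS:
                break
            words.append(token.item())
        return words

    # the initial (h, c) would come from the video encoder; zeros are placeholders
    print(greedy_decode(torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN)))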
Dataset
Video Dataset: We evaluate our approach by conducting experiments on the Microsoft Research Video Description (MSVD) BIBREF26 corpus, which provides descriptions for a collection of 1,970 video clips. Each video clip depicts a single action or a simple event, such as “shooting”, “cutting”, “playing the piano” and “cooking”, with a duration between 8 and 25 seconds. There are roughly 43 available sentences per video and 7 words per sentence on average. Following the majority of prior works BIBREF15 , BIBREF16 , BIBREF19 , BIBREF17 , we split the entire dataset into training, validation and test sets with 1200, 100 and 670 snippets, respectively.
Image Dataset: Compared with the data used for other LSTM structures and deep networks, the video dataset for the captioning task is small, so we apply transfer learning from image description. The COCO 2014 image description dataset BIBREF27 has frequently been used for experiments BIBREF12 , BIBREF11 , BIBREF10 , BIBREF14 ; it consists of more than 120,000 images, with about 82,000 and 40,000 images for training and test, respectively. We first pre-train our language model on the COCO 2014 training set, then perform transfer learning on MSVD with the full video description model.
Experimental Setup
Description Processing: Some minimal preprocessing is applied to the descriptions in both the MSVD and COCO 2014 datasets. We first employ the word_tokenize operation in the NLTK toolbox to obtain individual words, and then convert all words to lower case. All punctuation is removed, and each sentence is then started with <BOS> and ended with <EOS>. Finally, we combine the sets of words in MSVD and COCO 2014, and generate a vocabulary with 12,984 unique words. Each word input to our system is represented by a one-hot vector.
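A small sketch of this preprocessing step is shown below; apart from the NLTK tokenizer named above, the details (token names, example sentence) are illustrative:

    import string
    import nltk

    nltk.download("punkt", quiet=True)                   # tokenizer models

    def preprocess(sentence):
        tokens = nltk.word_tokenize(sentence.lower())    # tokenize and lower-case
        tokens = [t for t in tokens if t not in string.punctuation]
        return ["<BOS>"] + tokens + ["<EOS>"]

    print(preprocess("A man is playing the piano."))
    # ['<BOS>', 'a', 'man', 'is', 'playing', 'the', 'piano', '<EOS>']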
Video Preprocessing: As in previous video description works BIBREF16 , BIBREF19 , BIBREF15 , we sample one video frame in every ten, so these frames represent the given video, with 28.5 frames per video on average. We extract frame-wise caffe INLINEFORM0 layer features using the VGG-16 layers model, then feed the sequential features into our video captioning system.
We employ a bidirectional S2VT BIBREF19 and a joint bidirectional LSTM structure to investigate the performance of our bidirectional approach. For convenient comparison, we set the hidden unit size of all LSTMs in our system to 512 as in BIBREF15 , BIBREF19 , except for the first video encoder in the unidirectional joint LSTM. During the training phase, we set the maximum number of LSTM time steps to 80 in all our models and use mini-batches of 16 video-sentence pairs. We note that over 99% of the descriptions in MSVD and COCO 2014 contain no more than 40 words, and in BIBREF19 , Venugopalan et al. pointed out that 94% of the YouTube training videos satisfy our maximum length limit. To ensure sufficient visual content, we adopt two ways to truncate the videos and sentences adaptively when the sum of the number of frames and the number of words exceeds the limit. If the number of words is within 40, we truncate the frames to satisfy the maximum length. When the sentence is longer than 40 words, we discard the words beyond that length and take at most 40 video frames.
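The adaptive truncation rule can be summarised by the following sketch (a hedged reading of the rule above, with an 80-step budget and a 40-word cap):

    def truncate(frames, words, max_steps=80, max_words=40):
        if len(words) > max_words:
            words = words[:max_words]                    # discard words beyond the limit
            frames = frames[:max_words]                  # keep at most 40 frames
        else:
            frames = frames[:max_steps - len(words)]     # frames fill the remaining budget
        return frames, words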
Bidirectional S2VT: Similar to BIBREF19 , we implement several S2VT-based models: S2VT, bidirectional S2VT and reinforced S2VT with a bidirectional LSTM video encoder. We run S2VT using our video features and LSTM structure instead of the end-to-end model in BIBREF19 , which needs original RGB frames as input. For the bidirectional S2VT model, we first pre-train the description generator on COCO 2014 for image captioning. We then run forward and backward passes for video encoding and merge the hidden states step-wise with a learnt weight, while the language layer receives the merged hidden representation with null padding in place of words. We also pad the inputs of the forward and backward LSTMs with zeros at the decoding stage, and concatenate the merged hidden states to the embedded words. In the last model, we treat the merged bidirectional hidden states as a complementary enhancement and concatenate them to the original INLINEFORM0 features to obtain a reinforced representation of the video, then derive the sentence from the new features using the last LSTM. The loss is computed only at the decoding stage in all S2VT-based models.
Joint-BiLSTM: Different from the S2VT-based models, we employ a joint bidirectional LSTM network that encodes the video sequence and decodes the description with a separate LSTM, rather than sharing a common one. We stack two layers of LSTM networks to encode the video and pre-train the language model as in the S2VT-based models. Similarly, unidirectional LSTM, bidirectional LSTM and reinforced BiLSTM variants are run to investigate the performance of each structure. We set 1024 hidden units for the first LSTM in the unidirectional encoder so that its output can pass to the second encoder directly, and the memory cell and hidden state of the last time point are used to initialize the description decoder. The bidirectional structure and the reinforced BiLSTM in the encoder are implemented analogously to the corresponding S2VT-based structures, and the video representation is then fed into the description generator as in the unidirectional model described above.
Results and Analysis
BLEU BIBREF28 , METEOR BIBREF29 , ROUGE-L BIBREF30 and CIDEr BIBREF31 are common evaluation metrics for image and video description; the first three were originally proposed to evaluate machine translation, and CIDEr was proposed to evaluate image description when sufficient reference sentences are available. To quantitatively evaluate the performance of our bidirectional recurrent approach, we adopt the METEOR metric because of its robust performance. In contrast to the other three metrics, METEOR captures semantic aspects, since it identifies all possible matches with exact, stem, paraphrase and synonym matchers using the WordNet database, and computes sentence-level similarity scores according to matcher weights. The authors of CIDEr also argued that METEOR outperforms CIDEr when the reference set is small BIBREF31 .
We first compare our unidirectional and bidirectional structures and the reinforced BiLSTM. As shown in Table TABREF19 , in the S2VT-based models the bidirectional structure scores slightly lower than the unidirectional structure, while the opposite holds in the joint LSTM case. This may be caused by the padding at the description-generating stage in the S2VT-based structure. We note that the reinforced BiLSTM structure gains more than a 3% improvement over the unidirectional-only model in both the S2VT-based and joint LSTM structures, which means that combining bidirectional encodings of the video representation helps the video encoder exploit additional temporal structure (Figure FIGREF17 ). At the structural level, Table TABREF19 shows that our Joint-LSTM based models outperform the corresponding S2VT-based models, demonstrating that our Joint-LSTM structure benefits from encoding video and decoding natural language separately.
We also evaluate our Joint-BiLSTM structure by comparing it with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our reinforced Joint-BiLSTM model outperforms all of the baseline methods. The “LSTM” result in the first row is taken from BIBREF15 , and the second-to-last row denotes the best model in BIBREF17 , which combines local temporal structure using C3D with global temporal structure via temporal attention. From the first two rows, our unidirectional joint LSTM shows a clear improvement, and compared with the S2VT-VGG model in line 3 it also demonstrates some superiority. Even though LSTM-E jointly models video and description representations by minimizing the distance between a video and its corresponding sentence, our reinforced Joint-BiLSTM obtains better performance thanks to bidirectional encoding and separate visual and language models.
We observed that, although our unidirectional S2VT has the same deployment as BIBREF19 , our model gives slightly poorer performance (line 1, Table TABREF19 and line 3, Table TABREF20 ). As mentioned in Section 3.2.2, they employed an end-to-end model that reads original RGB frames and fine-tunes the VGG caffemodel, so their frame features from the VGG INLINEFORM0 layer are more compatible with the MSVD dataset and the description task. However, our joint LSTM demonstrates better performance with general features rather than data-specific ones, and is even superior to their model with multiple feature modalities (RGB + Flow, line 4, Table TABREF20 ), which suggests that our Joint-BiLSTM could show even stronger descriptive ability in an end-to-end setting. We will investigate an end-to-end variant of our Joint-BiLSTM in future work.
Conclusion and Future Works
In this paper, we introduced a sequence-to-sequence approach to describing video clips with natural language. The core of our method is the use of two separate LSTM networks, one for the visual encoder and one for the natural language generator. In particular, we encoded video sequences with a bidirectional Long-Short Term Memory (BiLSTM) network, which can effectively capture the bidirectional global temporal structure in video. Experimental results on the MSVD dataset demonstrated superior performance over many other state-of-the-art methods.
We also note some limitations of our model compared with the end-to-end framework employed in BIBREF19 and the distance measure used in BIBREF15 . In the future we will make more effort to address these limitations and to exploit linguistic domain knowledge in visual content understanding. | S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al.
864b5c1fe8c744f80a55e87421b29d6485b7efd0 | 864b5c1fe8c744f80a55e87421b29d6485b7efd0_0 | Q: What evaluation metrics do they use?
Text: Introduction
Electronic Health Records (EHR) have become ubiquitous in recent years in the United States, owing much to The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. BIBREF0 Their ubiquity has given researchers a treasure trove of new data, especially in the realm of unstructured textual data. However, this new data source comes with usage restrictions in order to preserve the privacy of individual patients, as mandated by the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires any researcher using this sensitive data to first strip the medical records of any protected health information (PHI), a process known as de-identification.
HIPAA allows for two methods for de-identifying PHIs: the “Expert Determination” method in which an expert certifies that the information is rendered not individually identifiable, and the “Safe Harbor” method in which 18 identifiers are removed or replaced with random data in order for the data to be considered not individually identifiable. Our research pertains to the second method (a list of the relevant identifiers can be seen in Table TABREF4 ).
The process of de-identification has been largely a manual and labor intensive task due to both the sensitive nature of the data and the limited availability of software to automate the task. This has led to a relatively small number of open health data sets available for public use. Recently, there have been two well-known de-identification challenges organized by Informatics for Integrating Biology and the Bedside (i2b2) to encourage innovation in the field of de-identification.
In this paper, we build on the recent advances in natural language processing, especially with regard to word embeddings, by incorporating the deep contextualized word embeddings developed by Peters et al. BIBREF1 into a deep learning architecture. More precisely, we present a deep learning architecture that differs from current architectures in the literature by using bi-directional long short-term memory networks (Bi-LSTMs) with variational dropout and deep contextualized word embeddings, while also using components already present in other systems such as traditional word embeddings, character LSTM embeddings and conditional random fields. We test this architecture on two gold standard data sets, the 2014 i2b2 de-identification Track 1 data set BIBREF2 and the nursing notes corpus BIBREF3 . The architecture achieves state-of-the-art performance on both data sets while also achieving faster convergence, without the use of dictionaries (or gazetteers) or other rule-based methods that are typically used in other de-identification systems.
The paper is organized as follows: In Section SECREF4 , we review the latest literature around techniques for de-identification with an emphasis on related work using deep learning techniques. In Section SECREF5 , we detail our deep learning architecture and also describe how we use the deep contextualized word embeddings method to improve our results. Section SECREF6 describes the two data sets we will use to evaluate our method and our evaluation metrics. Section SECREF7 presents the performance of our architecture on the data sets. In Section SECREF8 , we discuss the results and provide an analysis of the errors. Finally, in Section SECREF9 , we summarize our contributions while also discussing possible future research.
Background and Related Work
The task of automatic de-identification has been heavily studied recently, in part due to two main challenges organized by i2b2 in 2006 and in 2014. The task of de-identification can be classified as a named entity recognition (NER) problem which has been extensively studied in machine learning literature. Automated de-identification systems can be roughly broken down into four main categories:
Rule-based Systems
Rule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns. BIBREF2 Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use any labeled data. Hence, they are considered as unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regards to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents and rule-based systems often cannot correctly handle these types of inaccuracies present in the data. Critically, these systems cannot handle context which could render a medical text unreadable. For example, a diagnosis of “Lou Gehring disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehring” with randomized names rendering the text meaningless if enough of these tokens were replaced.
Machine Learning Systems
The drawbacks of such rule-based systems led researchers to adopt a machine learning approach. A comprehensive review of such systems can be found in BIBREF7 , BIBREF8 . In machine learning systems, given a sequence of input vectors INLINEFORM0 , a machine learning algorithm outputs label predictions INLINEFORM1 . Since the task of de-identification is a classification task, traditional classification algorithms such as support vector machines, conditional random fields (CRFs) and decision trees BIBREF9 have been used for building de-identification systems.
These machine learning-based systems have the advantage of being able to recognize complex patterns that are not as readily evident to the naked eye. However, the drawback of such ML-based systems is that, since classification is a supervised learning task, most of the common classification algorithms require a large labeled data set to build robust models. Furthermore, since most of the algorithms described in the last paragraph maximize the likelihood of a label INLINEFORM0 given a vector of inputs INLINEFORM1 , rare patterns that do not occur in the training data set would be misclassified as not being a PHI label. Finally, these models might not generalize to other text corpora whose patterns, such as sentence structures and the abbreviations commonly found in medical notes, differ significantly from the training data set.
Hybrid Systems
With both the advantages and disadvantages of stand alone rule-based and ML-based systems well-documented, systems such as the ones detailed in BIBREF2 combined both ML and rule-based systems to achieve impressive results. Systems such as the ones presented for 2014 i2b2 challenge by Yang et al. BIBREF10 and Liu et al. BIBREF11 used dictionary look-ups, regular expressions and CRFs to achieve accuracies of well over 90% in identifying PHIs.
It is important to note that such hybrid systems rely heavily on feature engineering, a process that manufactures new features from the data that are not present in the raw text. Most machine learning techniques, for example, cannot take text as an input. They require the text to be represented as a vector of numbers. An example of such features can be seen in the system that won the 2014 i2b2 de-identification challenge by Yang et al. BIBREF10 . Their system uses token features such as part-of-speech tagging and chunking, contextual features such as word lemma and POS tags of neighboring words, orthographic features such as capitalization and punctuation marks and task-specific features such as building a list that included all the full names, acronyms of US states and collecting TF-IDF-statistics. Although such hybrid systems achieve impressive results, the task of feature engineering is a time-intensive task that might not be generalizable to other text corpora.
Deep Learning Systems
With the disadvantages of the past three approaches to building a de-identification system in mind, the current state-of-the-art systems employ deep learning techniques to achieve better results than the best hybrid systems while also not requiring the time-consuming process of feature engineering. Deep learning is a subset of machine learning that uses multiple layers of Artificial Neural Networks (ANNs) and has been very successful at most Natural Language Processing (NLP) tasks. Recent advances in the field of deep learning and NLP, especially with regard to named entity recognition, have allowed systems such as the one by Dernoncourt et al. BIBREF9 to achieve better results on the 2014 i2b2 de-identification challenge data set than the winning hybrid system proposed by Yang et al. BIBREF10 . The advances in NLP and deep learning which have allowed for this performance are detailed below.
ANNs cannot take words as inputs and require numeric inputs; therefore, past approaches to using ANNs for NLP have employed a bag-of-words (BoW) representation of words, where a dictionary of all known words is built and each word in a sentence is assigned a unique vector that is input into the ANN. A drawback of such a technique is that words that have similar meanings are represented completely differently. As a solution to this problem, a technique called word embeddings has been used. Word embeddings gained popularity when Mikolov et al. BIBREF12 used ANNs to generate a distributed vector representation of a word based on the usage of the word in a text corpus. This way of representing words allows similar words to be represented by vectors of similar values while also allowing for complex operations such as the famous example: INLINEFORM0 , where INLINEFORM1 represents the vector for a particular word.
While pre-trained word embeddings such as the widely used GloVe BIBREF12 embeddings are revolutionary and powerful, such representations only capture one context representation, namely the one of the training corpus they were derived from. This shortcoming has led to the very recent development of context-dependent representations such as the ones developed by BIBREF1 , BIBREF13 , which can capture different features of a word.
The Embeddings from Language Models (ELMo) from the system by Peters et al. BIBREF1 are used by the architecture in this paper to achieve state-of-the-art results. The ELMo representations, learned by combining Bi-LSTMs with a language modeling objective, capture context-dependent aspects at the higher-level LSTM, while the lower-level LSTM captures aspects of syntax. Moreover, the outputs of the different layers of the system can be used independently or averaged to output embeddings that significantly improve some existing models for solving NLP problems. These results drive our motivation to include the ELMo representations in our architecture.
The use of ANNs for many machine learning tasks has gained popularity in recent years. Recently, a variant of recurrent neural networks (RNN) called Bi-directional Long Short-Term Memory (Bi-LSTM) networks has been successfully employed especially in the realm of NER.
In fact, several Bi-LSTM architectures have been proposed to tackle the problem of NER: LSTM-CRF, LSTM-CNNs-CRF and LSTM-CNNs BIBREF9 . The current best performing system on the i2b2 dataset is in fact a system based on LSTM-CRF BIBREF9 .
Method
Our architecture incorporates most of the recent advances in NLP and NER while also differing from other architectures described in the previous section by use of deep contextualized word embeddings, Bi-LSTMs with a variational dropout and the use of the Adam optimizer. Our architecture can be broken down into four distinct layers: pre-processing, embeddings, Bi-LSTM and CRF classifier. A graphical illustration of the architecture can be seen in Figure FIGREF16 while a summary of the parameters for our architecture can be found in Table TABREF17 .
Pre-processing Layer
For a given document INLINEFORM0 , we first break down the document into sentences INLINEFORM1 , tokens INLINEFORM2 and characters INLINEFORM3 where INLINEFORM4 represents the document number, INLINEFORM5 represents the sentence number, INLINEFORM6 represents the token number, and INLINEFORM7 represents the character number. For example, INLINEFORM8 Patient, where the token: “Patient” represents the 3rd token of the 2nd sentence of the 1st document.
After parsing the tokens, we use a widely used and readily available Python toolkit called the Natural Language Toolkit (NLTK) to generate a part-of-speech (POS) tag for each token. This produces a POS feature for each token, which we transform into a 20-dimensional one-hot-encoded input vector, INLINEFORM0 , and then feed into the main LSTM layer.
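As an illustration of this step, NLTK's tokenizer and tagger can be used as follows (the example sentence is made up, and the mapping of each tag to a 20-dimensional one-hot vector is left implicit):

    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("Patient was seen by Dr. Smith on 2014-03-01 .")
    print(nltk.pos_tag(tokens))
    # e.g. [('Patient', 'NN'), ('was', 'VBD'), ('seen', 'VBN'), ...]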
For the data labels, since the data labels can be made up of multiple tokens, we formatted the labels to the BIO scheme. The BIO scheme tags the beginning of a PHI with a B-, the rest of the same PHI tokens as I- and the rest of the tokens not associated with a PHI as O. For example, the sentence, “ INLINEFORM0 ”, would have the corresponding labels, “ INLINEFORM1 ”.
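A hypothetical helper for this conversion is sketched below; the span format and PHI type names are illustrative, not the authors' code:

    def to_bio(tokens, phi_spans):
        """phi_spans: list of (start, end_exclusive, phi_type) indices over tokens."""
        labels = ["O"] * len(tokens)
        for start, end, phi_type in phi_spans:
            labels[start] = "B-" + phi_type
            for i in range(start + 1, end):
                labels[i] = "I-" + phi_type
        return labels

    tokens = ["Dr.", "John", "Smith", "saw", "the", "patient"]
    print(to_bio(tokens, [(1, 3, "NAME")]))
    # ['O', 'B-NAME', 'I-NAME', 'O', 'O', 'O']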
Embedding Layer
For the embedding layer, we use three main types of embeddings to represent our input text: traditional word embeddings, ELMo embeddings and character-level LSTM embeddings.
The traditional word embeddings use the latest GloVe 3 BIBREF12 pre-trained word vectors that were trained on the Common Crawl with about 840 billion tokens. For every token input, INLINEFORM0 , the GloVe system outputs INLINEFORM1 , a dense 300-dimensional word vector representation of that same token. We also experimented with other word embeddings by using the bio-medical corpus trained word embeddings BIBREF14 to see if having word embeddings trained on medical texts will have an impact on our results.
As mentioned in previous sections, we also incorporate the powerful ELMo representations as a feature to our Bi-LSTMs. The specifics of the ELMo representations are detailed in BIBREF1 . In short, we compute an ELMo representation by passing a token input INLINEFORM0 to the ELMo network and averaging the layers of the network to produce a 1024-dimensional ELMo vector, INLINEFORM1 .
Character-level information can capture some information about the token itself while also mitigating issues such as unseen words and misspellings. While lemmatizing a token (i.e., turning inflected forms of a word into their base or dictionary form) can address these issues, tokens such as the ones found in medical texts can carry important distinctions in, for example, their grammatical form. As such, Ma et al. BIBREF15 have used Convolutional Neural Networks (CNN) while Lample et al. BIBREF16 have used Bi-LSTMs to produce character-enhanced representations of each unique token. We have utilized the latter approach of using Bi-LSTMs to produce a character-enhanced embedding for each unique word in our data set. Our parameters for the forward and backward LSTMs are 25 hidden units each and the maximum character length is 25, which results in a 50-dimensional embedding vector, INLINEFORM0 , for each token.
After creating the three embeddings for each token, INLINEFORM0 , we concatenate the GloVe and ELMo representations to produce a single 1324-dimensional word input vector, INLINEFORM1 . The concatenated word vector is then further concatenated with the character embedding vector, INLINEFORM2 , POS one-hot-encoded vector, INLINEFORM3 , and the casing embedded vector, INLINEFORM4 , to produce a single 1394-dimensional input vector, INLINEFORM5 , that we feed into our Bi-LSTM layer.
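Schematically, the concatenation looks as follows; the random vectors merely stand in for the real embedding look-ups, and the casing feature mentioned above is omitted since its dimensionality is not stated:

    import numpy as np

    glove = np.random.randn(300)       # pre-trained GloVe vector
    elmo = np.random.randn(1024)       # averaged ELMo vector
    char = np.random.randn(50)         # character-level Bi-LSTM embedding
    pos = np.zeros(20); pos[3] = 1.0   # one-hot POS feature

    word_input = np.concatenate([glove, elmo, char, pos])
    print(word_input.shape)            # (1394,), the per-token input to the Bi-LSTM layer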
Bi-LSTM Layer
The Bi-LSTM layer is composed of two LSTM layers, which are a variant of the Bidirectional RNNs. In short, the Bi-LSTM layer contains two independent LSTMs in which one network is fed input in the normal time direction while the other network is fed input in the reverse time direction. The outputs of the two networks can then be combined using either summation, multiplication, concatenation or averaging. Our architecture uses simple concatenation to combine the outputs of the two networks.
Our architecture for the Bi-LSTM layer is similar to the ones used by BIBREF16 , BIBREF17 , BIBREF18 with each LSTM containing 100 hidden units. To ensure that the neural networks do not overfit, we use a variant of the popular dropout technique called variational dropout BIBREF19 to regularize our neural networks. Variational dropout differs from the traditional naïve dropout technique by having the same dropout mask for the inputs, outputs and the recurrent layers BIBREF19 . This is in contrast to the traditional technique of applying a different dropout mask for each of the input and output layers. BIBREF20 shows that variational dropout applied to the output and recurrent units performs significantly better than naïve dropout or no dropout for the NER tasks. As such, we apply a dropout probability of 0.5 for both the output and the recurrent units in our architecture.
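A minimal sketch of this layer is shown below; the "locked" dropout module samples one mask per sequence and reuses it at every time step, which is the defining property of variational dropout on the outputs, while the remaining wiring is an illustrative assumption:

    import torch
    import torch.nn as nn

    class LockedDropout(nn.Module):
        def __init__(self, p=0.5):
            super().__init__()
            self.p = p

        def forward(self, x):                            # x: (batch, time, features)
            if not self.training or self.p == 0.0:
                return x
            mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - self.p)
            return x * mask / (1 - self.p)               # same mask broadcast over time

    bilstm = nn.LSTM(1394, 100, batch_first=True, bidirectional=True)
    drop = LockedDropout(0.5)
    out, _ = bilstm(torch.randn(4, 30, 1394))            # (batch, sentence length, features)
    out = drop(out)                                      # (4, 30, 200), passed on to the CRF layer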
CRF layer
As a final step, the outputs of the Bi-LSTM layer are inputted into a linear-chain CRF classifier, which maximizes the label probabilities of the entire input sentence. This approach is identical to the Bi-LSTM-CRF model by Huang et al. BIBREF21 CRFs have been incorporated in numerous state-of-the-art models BIBREF16 , BIBREF18 , BIBREF3 because of their ability to incorporate tag information at the sentence level.
While the Bi-LSTM layer takes information from the context into account when generating its label predictions, each decision is independent from the other labels in the sentence. The CRF allows us to find the labeling sequence in a sentence with the highest probability. This way, both previous and subsequent label information is used in determining the label of a given token. As a sequence model, the CRF posits a probability model for the label sequence of the tokens in a sentence, conditional on the word sequence and the output scores from the Bi-LTSM model for the given sentence. In doing so, the CRF models the conditional distribution of the label sequence instead of a joint distribution with the words and output scores. Thus, it does not assume independent features, while at the same time not making strong distributional assumptions about the relationship between the features and sequence labels.
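A hedged sketch of this final layer is given below, assuming the third-party pytorch-crf package (pip install pytorch-crf) rather than the authors' own implementation; the tag count and tensor shapes are illustrative:

    import torch
    import torch.nn as nn
    from torchcrf import CRF

    num_tags = 9                                 # e.g. BIO tags over a few PHI types
    emit = nn.Linear(200, num_tags)              # Bi-LSTM outputs -> per-tag scores
    crf = CRF(num_tags, batch_first=True)

    bilstm_out = torch.randn(4, 30, 200)         # (batch, sentence length, 2 * 100)
    tags = torch.randint(0, num_tags, (4, 30))

    emissions = emit(bilstm_out)
    loss = -crf(emissions, tags)                 # negative log-likelihood of the tag sequence
    best_paths = crf.decode(emissions)           # Viterbi decoding, one label list per sentence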
Data and Evaluation Metrics
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
The i2b2 corpus was used by all tracks of the 2014 i2b2 challenge. It consists of 1,304 patient progress notes for 296 diabetic patients. All the PHIs were removed and replaced with random replacements. The PHIs in this data set were broken down first into the HIPAA categories and then into the i2b2-PHI categories as shown in Table TABREF23 . Overall, the data set contains 56,348 sentences with 984,723 separate tokens of which 41,355 are separate PHI tokens, which represent 28,867 separate PHI instances. For our test-train-valid split, we chose 10% of the training sentences to serve as our validation set, which represents 3,381 sentences while a separately held-out official test data set was specified by the competition. This test data set contains 22,541 sentences including 15,275 separate PHI tokens.
The nursing notes were originally collected by Neamatullah et al. BIBREF3 . The data set contains 2,434 notes of which there are 1,724 separate PHI instances. A summary of the breakdown of the PHI categories of this nursing corpora can be seen in Table TABREF23 .
Evaluation Metrics
For de-identification tasks, the three metrics we will use to evaluate the performance of our architecture are Precision, Recall and INLINEFORM0 score as defined below. We will compute both the binary INLINEFORM1 score and the three metrics for each PHI type for both data sets. Note that binary INLINEFORM2 score calculates whether or not a token was identified as a PHI as opposed to correctly predicting the right PHI type. For de-identification, we place more importance on identifying if a token was a PHI instance with correctly predicting the right PHI type as a secondary objective. INLINEFORM3 INLINEFORM4
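The standard definitions referred to above (reconstructed here, since the formulas appear only as INLINEFORM placeholders in this text) are, in terms of true positives (TP), false positives (FP) and false negatives (FN):

    \text{Precision} = \frac{TP}{TP + FP}, \quad
    \text{Recall} = \frac{TP}{TP + FN}, \quad
    F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}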
Notice that a high recall is paramount given the risk of accidentally disclosing sensitive patient information if not all PHI are detected and removed from the document or replaced by fake data. A high precision is also desired to preserve the integrity of the documents, as a large number of false positives might obscure the meaning of the text or even distort it. As the harmonic mean of precision and recall, the INLINEFORM0 score gives an overall measure for model performance that is frequently employed in the NLP literature.
As a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.
Results
We evaluated the architecture on both the i2b2-PHI categories and the HIPAA-PHI categories for the i2b2 data set based on token-level labels. Note that the HIPAA categories are a super set of the i2b2-PHI categories. We also ran the analysis 5+ times to give us a range of maximum scores for the different data sets.
Table TABREF25 gives us a summary of how our architecture performed against other systems on the binary INLINEFORM0 score metrics while Table TABREF26 and Table TABREF27 summarizes the performance of our architecture against other systems on HIPAA-PHI categories and i2b2-PHI categories respectively. Table TABREF28 presents a summary of the performance on the nursing note corpus while also contrasting the performances achieved by the deidentify system.
Discussion and Error Analysis
As we can see in Table TABREF26 , with the exception of ID, our architecture performs considerably better than the systems by Liu et al. and Yang et al. Dernoncourt et al. did not provide exact figures for the HIPAA-PHI categories, so we have excluded them from this part of the analysis. Furthermore, Table TABREF25 shows that our architecture performs similarly to the best scores achieved by Dernoncourt et al., with our architecture slightly edging out Dernoncourt et al. on the precision metric. For the nursing corpus, our system, while not reaching its performance on the i2b2 data set, managed to beat the scores achieved by the deidentify system while also achieving a binary INLINEFORM0 score of over 0.812. It is important to note that, since deidentify is an unsupervised learning system, it did not require a train-valid-test split and therefore used the whole data set for its performance numbers. The results of our architecture are assessed using a 60%/40% train/test split.
Our architecture noticeably converges faster than the NeuroNER, which was trained for 100 epochs and the system by Liu et al. BIBREF18 which was trained for 80 epochs. Different runs of training our architecture on the i2b2 dataset converge at around 23 INLINEFORM0 4 epochs. A possible explanation for this is due to our architecture using the Adam optimizer, whereas the NeuroNER system use the Stochastic Gradient Descent (SGD) optimizer. In fact, Reimers et al. BIBREF20 show that the SGD optimizer performed considerably worse than the Adam optimizer for different NLP tasks.
Furthermore, we also do not see any noticeable improvements from using the PubMed database trained word embeddings BIBREF14 instead of the general text trained GloVe word embeddings. In fact, we consistently saw better INLINEFORM0 scores using the GloVe embeddings. This could be due to the fact that our use case was for identifying general labels such as Names, Phones, Locations etc. instead of bio-medical specific terms such as diseases which are far better represented in the PubMed corpus.
Error Analysis
We will mainly focus on the two PHI categories: Profession and ID for our error analysis on the i2b2 data set. It is interesting to note that the best performing models on the i2b2 data set by Dernoncourt et al. BIBREF9 experienced similar lower performances on the same two categories. However, we note the performances by Dernoncourt et al. were achieved using a “combination of n-gram, morphological, orthographic and gazetteer features” BIBREF9 while our architecture uses only POS tagging as an external feature. Dernoncourt et al. posits that the lower performance on the Profession category might be due to the close embeddings of the Profession tokens to other PHI tokens which we can confirm on our architecture as well. Furthermore, our experiments show that the Profession PHI performs considerably better with the PubMed embedded model than GloVe embedded model. This could be due to the fact that PubMed embeddings were trained on the PubMed database, which is a database of medical literature. GloVe on the other hand was trained on a general database, which means the PubMed embeddings for Profession tokens might not be as close to other tokens as is the case for the GloVe embeddings.
For the ID PHI, our analysis shows that some of the errors were due to tokenization errors. For example, a “:” was counted as PHI token which our architecture correctly predicted as not a PHI token. Since our architecture is not custom tailored to detect sophisticated ID patterns such as the systems in BIBREF9 , BIBREF10 , we have failed to detect some ID PHIs such as “265-01-73”, a medical record number, which our architecture predicted as a phone number due to the format of the number. Such errors could easily be mitigated by the use of simple regular expressions.
We can see that our architecture outperforms the deidentify system by a considerable margin on most categories as measured by the INLINEFORM0 score. For example, the authors of deidentify note that Date PHIs have considerably low precision values, while our architecture achieves a precision value of greater than 0.915 for the Date PHI. However, Burckhardt et al. BIBREF22 achieve an impressive precision of 0.899 and recall of 1.0 for the Phone PHI, while our architecture only manages 0.778 and 0.583, respectively. Our analysis of this category shows that this is mainly due to a difference in tokenization: stand-alone numbers are being classified as not a PHI.
We tried to use the model that we trained on the i2b2 data set to predict the categories of the nursing data set. However, due to differences in the text structure, the actual text and the format, we achieved worse-than-random performance on the nursing data set. This brings up an important point about the transferability of such models.
Ablation Analysis
Our ablation analysis shows that each layer of our model adds to the overall performance. Figure FIGREF33 shows the binary INLINEFORM0 scores on the i2b2 data set, with each bar corresponding to a feature toggled off. For example, the “No Char Embd” bar shows the performance of the model with no character embeddings and everything else the same as in our best model.
We can see a noticeably larger change in performance when the ELMo embeddings are excluded than when the GloVe embeddings are excluded. The slight decrease in performance when we use no GloVe embeddings shows that this is a feature we might choose to exclude if computation time is limited. Furthermore, we can see the impact of having no variational dropout and only using naïve dropout; it shows that variational dropout is better at regularizing our neural network.
Conclusion
In this study, we show that our deep learning architecture, which incorporates the latest developments in contextual word embeddings and NLP, achieves state-of-the-art performance on two widely available gold standard de-identification data sets while reaching performance similar to the best available system in fewer epochs. Our architecture also significantly improves over the performance of the hybrid system deidentify on the nursing data set.
This architecture could be integrated into a client-ready system such as the deidentify system. However, as mentioned in Section SECREF8 , the use of a dictionary (or gazetteer) might help improve the model even further, especially with regard to the Location and Profession PHI types. Such a hybrid system would be highly beneficial to practitioners who need to de-identify patient data on a daily basis. | Precision, Recall and INLINEFORM0 score
d469c7de5c9e6dd8a901190e95688c446f12118f | d469c7de5c9e6dd8a901190e95688c446f12118f_0 | Q: What performance is achieved?
Text: Introduction
Electronic Health Records (EHR) have become ubiquitous in recent years in the United States, owing much to The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. BIBREF0 Their ubiquity has given researchers a treasure trove of new data, especially in the realm of unstructured textual data. However, this new data source comes with usage restrictions in order to preserve the privacy of individual patients, as mandated by the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires any researcher using this sensitive data to first strip the medical records of any protected health information (PHI), a process known as de-identification.
HIPAA allows for two methods for de-identifying PHIs: the “Expert Determination” method in which an expert certifies that the information is rendered not individually identifiable, and the “Safe Harbor” method in which 18 identifiers are removed or replaced with random data in order for the data to be considered not individually identifiable. Our research pertains to the second method (a list of the relevant identifiers can be seen in Table TABREF4 ).
The process of de-identification has been largely a manual and labor intensive task due to both the sensitive nature of the data and the limited availability of software to automate the task. This has led to a relatively small number of open health data sets available for public use. Recently, there have been two well-known de-identification challenges organized by Informatics for Integrating Biology and the Bedside (i2b2) to encourage innovation in the field of de-identification.
In this paper, we build on the recent advances in natural language processing, especially with regard to word embeddings, by incorporating the deep contextualized word embeddings developed by Peters et al. BIBREF1 into a deep learning architecture. More precisely, we present a deep learning architecture that differs from current architectures in the literature by using bi-directional long short-term memory networks (Bi-LSTMs) with variational dropout and deep contextualized word embeddings, while also using components already present in other systems such as traditional word embeddings, character LSTM embeddings and conditional random fields. We test this architecture on two gold standard data sets, the 2014 i2b2 de-identification Track 1 data set BIBREF2 and the nursing notes corpus BIBREF3 . The architecture achieves state-of-the-art performance on both data sets while also achieving faster convergence, without the use of dictionaries (or gazetteers) or other rule-based methods that are typically used in other de-identification systems.
The paper is organized as follows: In Section SECREF4 , we review the latest literature around techniques for de-identification with an emphasis on related work using deep learning techniques. In Section SECREF5 , we detail our deep learning architecture and also describe how we use the deep contextualized word embeddings method to improve our results. Section SECREF6 describes the two data sets we will use to evaluate our method and our evaluation metrics. Section SECREF7 presents the performance of our architecture on the data sets. In Section SECREF8 , we discuss the results and provide an analysis of the errors. Finally, in Section SECREF9 , we summarize our contributions while also discussing possible future research.
Background and Related Work
The task of automatic de-identification has been heavily studied recently, in part due to two main challenges organized by i2b2 in 2006 and in 2014. The task of de-identification can be classified as a named entity recognition (NER) problem which has been extensively studied in machine learning literature. Automated de-identification systems can be roughly broken down into four main categories:
Rule-based Systems
Rule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns. BIBREF2 Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use any labeled data. Hence, they are considered as unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regards to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents and rule-based systems often cannot correctly handle these types of inaccuracies present in the data. Critically, these systems cannot handle context which could render a medical text unreadable. For example, a diagnosis of “Lou Gehring disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehring” with randomized names rendering the text meaningless if enough of these tokens were replaced.
Machine Learning Systems
The drawbacks of such rule-based systems led researchers to adopt a machine learning approach. A comprehensive review of such systems can be found in BIBREF7 , BIBREF8 . In machine learning systems, given a sequence of input vectors INLINEFORM0 , a machine learning algorithm outputs label predictions INLINEFORM1 . Since the task of de-identification is a classification task, traditional classification algorithms such as support vector machines, conditional random fields (CRFs) and decision trees BIBREF9 have been used for building de-identification systems.
These machine learning-based systems have the advantage of being able to recognize complex patterns that are not as readily evident to the naked eye. However, the drawback of such ML-based systems is that, since classification is a supervised learning task, most of the common classification algorithms require a large labeled data set to build robust models. Furthermore, since most of the algorithms described in the last paragraph maximize the likelihood of a label INLINEFORM0 given a vector of inputs INLINEFORM1 , rare patterns that do not occur in the training data set would be misclassified as not being a PHI label. Finally, these models might not generalize to other text corpora whose patterns, such as sentence structures and the abbreviations commonly found in medical notes, differ significantly from the training data set.
Hybrid Systems
With both the advantages and disadvantages of stand alone rule-based and ML-based systems well-documented, systems such as the ones detailed in BIBREF2 combined both ML and rule-based systems to achieve impressive results. Systems such as the ones presented for 2014 i2b2 challenge by Yang et al. BIBREF10 and Liu et al. BIBREF11 used dictionary look-ups, regular expressions and CRFs to achieve accuracies of well over 90% in identifying PHIs.
It is important to note that such hybrid systems rely heavily on feature engineering, a process that manufactures new features from the data that are not present in the raw text. Most machine learning techniques, for example, cannot take text as an input. They require the text to be represented as a vector of numbers. An example of such features can be seen in the system that won the 2014 i2b2 de-identification challenge by Yang et al. BIBREF10 . Their system uses token features such as part-of-speech tagging and chunking, contextual features such as word lemma and POS tags of neighboring words, orthographic features such as capitalization and punctuation marks and task-specific features such as building a list that included all the full names, acronyms of US states and collecting TF-IDF-statistics. Although such hybrid systems achieve impressive results, the task of feature engineering is a time-intensive task that might not be generalizable to other text corpora.
Deep Learning Systems
With the disadvantages of the past three approaches to building a de-identification system in mind, the current state-of-the-art systems employ deep learning techniques to achieve better results than the best hybrid systems while also not requiring the time-consuming process of feature engineering. Deep learning is a subset of machine learning that uses multiple layers of Artificial Neural Networks (ANNs) and has been very successful at most Natural Language Processing (NLP) tasks. Recent advances in the field of deep learning and NLP, especially in regard to named entity recognition, have allowed systems such as the one by Dernoncourt et al. BIBREF9 to achieve better results on the 2014 i2b2 de-identification challenge data set than the winning hybrid system proposed by Yang et al. BIBREF10 . The advances in NLP and deep learning that have allowed for this performance are detailed below.
ANNs cannot take words as inputs and require numeric inputs; therefore, past approaches to using ANNs for NLP have been to employ a bag-of-words (BoW) representation of words, where a dictionary is built of all known words and each word in a sentence is assigned a unique vector that is inputted into the ANN. A drawback of such a technique is that words with similar meanings are represented completely differently. As a solution to this problem, a technique called word embeddings has been used. Word embeddings gained popularity when Mikolov et al. BIBREF12 used ANNs to generate a distributed vector representation of a word based on the usage of the word in a text corpus. This way of representing words allows similar words to be represented by vectors of similar values while also allowing for complex operations such as the famous example: INLINEFORM0 , where INLINEFORM1 represents a vector for a particular word.
While pre-trained word embeddings such as the widely used GloVe BIBREF12 embeddings are revolutionary and powerful, such representations only capture one context representation, namely the one of the training corpus they were derived from. This shortcoming has led to the very recent development of context-dependent representations such as the ones developed by BIBREF1 , BIBREF13 , which can capture different features of a word.
The Embeddings from Language Models (ELMo) from the system by Peters et al. BIBREF1 are used by the architecture in this paper to achieve state-of-the-art results. The ELMo representations, learned by combining Bi-LSTMs with a language modeling objective, capture context-dependent aspects at the higher-level LSTM while the lower-level LSTM captures aspects of syntax. Moreover, the outputs of the different layers of the system can be used independently or averaged to output embeddings that significantly improve some existing models for solving NLP problems. These results drive our motivation to include the ELMo representations in our architecture.
The use of ANNs for many machine learning tasks has gained popularity in recent years. Recently, a variant of recurrent neural networks (RNN) called Bi-directional Long Short-Term Memory (Bi-LSTM) networks has been successfully employed especially in the realm of NER.
In fact, several Bi-LSTM architectures have been proposed to tackle the problem of NER: LSTM-CRF, LSTM-CNNs-CRF and LSTM-CNNs BIBREF9 . The current best performing system on the i2b2 dataset is in fact a system based on LSTM-CRF BIBREF9 .
Method
Our architecture incorporates most of the recent advances in NLP and NER while also differing from other architectures described in the previous section by use of deep contextualized word embeddings, Bi-LSTMs with a variational dropout and the use of the Adam optimizer. Our architecture can be broken down into four distinct layers: pre-processing, embeddings, Bi-LSTM and CRF classifier. A graphical illustration of the architecture can be seen in Figure FIGREF16 while a summary of the parameters for our architecture can be found in Table TABREF17 .
Pre-processing Layer
For a given document INLINEFORM0 , we first break down the document into sentences INLINEFORM1 , tokens INLINEFORM2 and characters INLINEFORM3 where INLINEFORM4 represents the document number, INLINEFORM5 represents the sentence number, INLINEFORM6 represents the token number, and INLINEFORM7 represents the character number. For example, INLINEFORM8 Patient, where the token: “Patient” represents the 3rd token of the 2nd sentence of the 1st document.
After parsing the tokens, we use a widely used and readily available Python toolkit called the Natural Language Toolkit (NLTK) to generate a part-of-speech (POS) tag for each token. This generates a POS feature for each token, which we transform into a 20-dimensional one-hot-encoded input vector, INLINEFORM0 , and then feed into the main LSTM layer.
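Below is a minimal sketch of how such a POS feature could be produced with NLTK; the 20-bucket tag grouping is an illustrative assumption of ours, since the exact tag set behind the 20-dimensional vector is not listed here.

```python
# A sketch of the POS feature: NLTK's default tagger assigns Penn Treebank
# tags, which are then mapped into 20 illustrative buckets and one-hot encoded.
import numpy as np
import nltk  # may require nltk.download('averaged_perceptron_tagger')

POS_BUCKETS = ['NN', 'NNS', 'NNP', 'NNPS', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ',
               'JJ', 'JJR', 'JJS', 'RB', 'IN', 'DT', 'CD', 'PRP', 'CC', 'OTHER']

def pos_one_hot(tokens):
    """Return a (len(tokens), 20) array of one-hot POS features."""
    feats = np.zeros((len(tokens), len(POS_BUCKETS)), dtype=np.float32)
    for i, (_, tag) in enumerate(nltk.pos_tag(tokens)):
        idx = POS_BUCKETS.index(tag) if tag in POS_BUCKETS else POS_BUCKETS.index('OTHER')
        feats[i, idx] = 1.0
    return feats

print(pos_one_hot(['The', 'patient', 'was', 'admitted', 'on', 'Friday', '.']).shape)
```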
For the data labels, since the data labels can be made up of multiple tokens, we formatted the labels to the BIO scheme. The BIO scheme tags the beginning of a PHI with a B-, the rest of the same PHI tokens as I- and the rest of the tokens not associated with a PHI as O. For example, the sentence, “ INLINEFORM0 ”, would have the corresponding labels, “ INLINEFORM1 ”.
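As a concrete illustration of the BIO scheme (with a made-up sentence and PHI span, since the original example above is elided):

```python
# Convert PHI span annotations into BIO tags: the first token of a PHI gets
# B-<type>, the remaining tokens of the same PHI get I-<type>, all else is O.
def to_bio(tokens, phi_spans):
    """phi_spans: list of (start_idx, end_idx_exclusive, phi_type)."""
    tags = ['O'] * len(tokens)
    for start, end, phi_type in phi_spans:
        tags[start] = 'B-' + phi_type
        for i in range(start + 1, end):
            tags[i] = 'I-' + phi_type
    return tags

tokens = ['Dr.', 'John', 'Smith', 'saw', 'the', 'patient', 'today', '.']
print(to_bio(tokens, [(1, 3, 'NAME')]))
# ['O', 'B-NAME', 'I-NAME', 'O', 'O', 'O', 'O', 'O']
```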
Embedding Layer
For the embedding layer, we use three main types of embeddings to represent our input text: traditional word embeddings, ELMo embeddings and character-level LSTM embeddings.
The traditional word embeddings use the latest GloVe 3 BIBREF12 pre-trained word vectors that were trained on the Common Crawl with about 840 billion tokens. For every token input, INLINEFORM0 , the GloVe system outputs INLINEFORM1 , a dense 300-dimensional word vector representation of that same token. We also experimented with other word embeddings by using the bio-medical corpus trained word embeddings BIBREF14 to see if having word embeddings trained on medical texts will have an impact on our results.
As mentioned in previous sections, we also incorporate the powerful ELMo representations as a feature to our Bi-LSTMs. The specifics of the ELMo representations are detailed in BIBREF1 . In short, we compute an ELMo representation by passing a token input INLINEFORM0 to the ELMo network and averaging the layers of the network to produce a 1024-dimensional ELMo vector, INLINEFORM1 .
Character-level information can capture some information about the token itself while also mitigating issues such as unseen words and misspellings. While lemmatizing (i.e., the act of turning inflected forms of a word into their base or dictionary form) a token can solve these issues, tokens such as the ones found in medical texts could carry important distinctions in, for example, the grammatical form of the token. As such, Ma et al. BIBREF15 have used Convolutional Neural Networks (CNN) while Lample et al. BIBREF16 have used Bi-LSTMs to produce character-enhanced representations of each unique token. We have utilized the latter approach of using Bi-LSTMs to produce a character-enhanced embedding for each unique word in our data set. Our parameters for the forward and backward LSTMs are 25 units each and the maximum character length is 25, which results in a 50-dimensional embedding vector, INLINEFORM0 , for each token.
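A hedged Keras sketch of this character-level embedder is shown below; the character vocabulary size and character embedding width are illustrative assumptions, while the 25 units per direction and the 25-character maximum follow the description above.

```python
import tensorflow as tf

MAX_CHARS, CHAR_VOCAB, CHAR_DIM = 25, 100, 25   # vocab size and CHAR_DIM are assumed

char_ids = tf.keras.Input(shape=(MAX_CHARS,), dtype='int32')            # one token's characters
x = tf.keras.layers.Embedding(CHAR_VOCAB, CHAR_DIM, mask_zero=True)(char_ids)
char_vec = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(25))(x)   # 25 + 25 = 50 dims
char_embedder = tf.keras.Model(char_ids, char_vec)
char_embedder.summary()
```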
After creating the three embeddings for each token, INLINEFORM0 , we concatenate the GloVe and ELMo representations to produce a single 1324-dimensional word input vector, INLINEFORM1 . The concatenated word vector is then further concatenated with the character embedding vector, INLINEFORM2 , POS one-hot-encoded vector, INLINEFORM3 , and the casing embedded vector, INLINEFORM4 , to produce a single 1394-dimensional input vector, INLINEFORM5 , that we feed into our Bi-LSTM layer.
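The dimension bookkeeping of that concatenation looks as follows; placeholders stand in for the real embedding lookups, and the casing vector is omitted because the dimensions given in the text already sum to 1394 without it.

```python
import numpy as np

glove_vec = np.zeros(300)    # GloVe token embedding
elmo_vec  = np.zeros(1024)   # averaged ELMo embedding
char_vec  = np.zeros(50)     # character Bi-LSTM embedding
pos_vec   = np.zeros(20)     # one-hot POS feature

word_vec    = np.concatenate([glove_vec, elmo_vec])         # 1324 dims
token_input = np.concatenate([word_vec, char_vec, pos_vec])
print(token_input.shape)                                     # (1394,)
```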
Bi-LSTM Layer
The Bi-LSTM layer is composed of two LSTM layers, which are a variant of the Bidirectional RNNs. In short, the Bi-LSTM layer contains two independent LSTMs in which one network is fed input in the normal time direction while the other network is fed input in the reverse time direction. The outputs of the two networks can then be combined using either summation, multiplication, concatenation or averaging. Our architecture uses simple concatenation to combine the outputs of the two networks.
Our architecture for the Bi-LSTM layer is similar to the ones used by BIBREF16 , BIBREF17 , BIBREF18 with each LSTM containing 100 hidden units. To ensure that the neural networks do not overfit, we use a variant of the popular dropout technique called variational dropout BIBREF19 to regularize our neural networks. Variational dropout differs from the traditional naïve dropout technique by having the same dropout mask for the inputs, outputs and the recurrent layers BIBREF19 . This is in contrast to the traditional technique of applying a different dropout mask for each of the input and output layers. BIBREF20 shows that variational dropout applied to the output and recurrent units performs significantly better than naïve dropout or no dropout for the NER tasks. As such, we apply a dropout probability of 0.5 for both the output and the recurrent units in our architecture.
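A hedged Keras sketch of this layer is given below; Keras' dropout and recurrent_dropout arguments reuse the same mask at every timestep, which approximates (but is not identical to) the variational scheme described above.

```python
import tensorflow as tf

seq_input = tf.keras.Input(shape=(None, 1394))   # (timesteps, per-token features)
bilstm_out = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(100, return_sequences=True,
                         dropout=0.5, recurrent_dropout=0.5))(seq_input)
model = tf.keras.Model(seq_input, bilstm_out)    # outputs (batch, timesteps, 200)
model.summary()
```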
CRF layer
As a final step, the outputs of the Bi-LSTM layer are inputted into a linear-chain CRF classifier, which maximizes the label probabilities of the entire input sentence. This approach is identical to the Bi-LSTM-CRF model by Huang et al. BIBREF21 . CRFs have been incorporated in numerous state-of-the-art models BIBREF16 , BIBREF18 , BIBREF3 because of their ability to incorporate tag information at the sentence level.
While the Bi-LSTM layer takes information from the context into account when generating its label predictions, each decision is independent of the other labels in the sentence. The CRF allows us to find the labeling sequence in a sentence with the highest probability. This way, both previous and subsequent label information is used in determining the label of a given token. As a sequence model, the CRF posits a probability model for the label sequence of the tokens in a sentence, conditional on the word sequence and the output scores from the Bi-LSTM model for the given sentence. In doing so, the CRF models the conditional distribution of the label sequence instead of a joint distribution with the words and output scores. Thus, it does not assume independent features, while at the same time not making strong distributional assumptions about the relationship between the features and sequence labels.
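The decoding step of such a linear-chain CRF can be sketched with a small Viterbi routine over per-token label scores (e.g., the Bi-LSTM outputs) and a learned transition matrix; both matrices below are random placeholders.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (T, L) per-token label scores; transitions: (L, L) label-to-label scores."""
    T, L = emissions.shape
    score, backptr = emissions[0].copy(), np.zeros((T, L), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t], score = total.argmax(axis=0), total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]                      # most probable label index per token

rng = np.random.default_rng(0)
print(viterbi_decode(rng.normal(size=(6, 5)), rng.normal(size=(5, 5))))
```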
Data and Evaluation Metrics
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
The i2b2 corpus was used by all tracks of the 2014 i2b2 challenge. It consists of 1,304 patient progress notes for 296 diabetic patients. All the PHIs were removed and replaced with random replacements. The PHIs in this data set were broken down first into the HIPAA categories and then into the i2b2-PHI categories as shown in Table TABREF23 . Overall, the data set contains 56,348 sentences with 984,723 separate tokens of which 41,355 are separate PHI tokens, which represent 28,867 separate PHI instances. For our test-train-valid split, we chose 10% of the training sentences to serve as our validation set, which represents 3,381 sentences while a separately held-out official test data set was specified by the competition. This test data set contains 22,541 sentences including 15,275 separate PHI tokens.
The nursing notes were originally collected by Neamatullah et al. BIBREF3 . The data set contains 2,434 notes of which there are 1,724 separate PHI instances. A summary of the breakdown of the PHI categories of this nursing corpora can be seen in Table TABREF23 .
Evaluation Metrics
For de-identification tasks, the three metrics we will use to evaluate the performance of our architecture are Precision, Recall and INLINEFORM0 score as defined below. We will compute both the binary INLINEFORM1 score and the three metrics for each PHI type for both data sets. Note that binary INLINEFORM2 score calculates whether or not a token was identified as a PHI as opposed to correctly predicting the right PHI type. For de-identification, we place more importance on identifying if a token was a PHI instance with correctly predicting the right PHI type as a secondary objective. INLINEFORM3 INLINEFORM4
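A minimal token-level scorer for the binary setting described above might look like this (a token counts as positive whenever it carries any B- or I- tag):

```python
def binary_prf(gold_tags, pred_tags):
    is_phi = lambda t: t != 'O'
    tp = sum(is_phi(g) and is_phi(p) for g, p in zip(gold_tags, pred_tags))
    fp = sum(not is_phi(g) and is_phi(p) for g, p in zip(gold_tags, pred_tags))
    fn = sum(is_phi(g) and not is_phi(p) for g, p in zip(gold_tags, pred_tags))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(binary_prf(['B-NAME', 'I-NAME', 'O', 'B-DATE'],
                 ['B-NAME', 'O', 'O', 'B-PHONE']))   # (1.0, 0.667, 0.8)
```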
Notice that a high recall is paramount given the risk of accidentally disclosing sensitive patient information if not all PHI are detected and removed from the document or replaced by fake data. A high precision is also desired to preserve the integrity of the documents, as a large number of false positives might obscure the meaning of the text or even distort it. As the harmonic mean of precision and recall, the INLINEFORM0 score gives an overall measure for model performance that is frequently employed in the NLP literature.
As a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.
Results
We evaluated the architecture on both the i2b2-PHI categories and the HIPAA-PHI categories for the i2b2 data set based on token-level labels. Note that the HIPAA categories are a super set of the i2b2-PHI categories. We also ran the analysis 5+ times to give us a range of maximum scores for the different data sets.
Table TABREF25 gives us a summary of how our architecture performed against other systems on the binary INLINEFORM0 score metrics while Table TABREF26 and Table TABREF27 summarizes the performance of our architecture against other systems on HIPAA-PHI categories and i2b2-PHI categories respectively. Table TABREF28 presents a summary of the performance on the nursing note corpus while also contrasting the performances achieved by the deidentify system.
Discussion and Error Analysis
As we can see in Table TABREF26 , with the exception of ID, our architecture performs considerably better than the systems by Liu et al. and Yang et al. Dernoncourt et al. did not provide exact figures for the HIPAA-PHI categories, so we have excluded them from our analysis. Furthermore, Table TABREF25 shows that our architecture performs similarly to the best scores achieved by Dernoncourt et al., with our architecture slightly edging out Dernoncourt et al. on the precision metric. For the nursing corpus, our system, while not performing as well as on the i2b2 data set, managed to best the scores achieved by the deidentify system while also achieving a binary INLINEFORM0 score of over 0.812. It is important to note that because deidentify was an unsupervised learning system, it did not require the use of a train-valid-test split and therefore used the whole data set for its performance numbers. The results of our architecture are assessed using a 60%/40% train/test split.
Our architecture converges noticeably faster than NeuroNER, which was trained for 100 epochs, and the system by Liu et al. BIBREF18 , which was trained for 80 epochs. Different runs of training our architecture on the i2b2 dataset converge at around 23 INLINEFORM0 4 epochs. A possible explanation for this is that our architecture uses the Adam optimizer, whereas the NeuroNER system uses the Stochastic Gradient Descent (SGD) optimizer. In fact, Reimers et al. BIBREF20 show that the SGD optimizer performed considerably worse than the Adam optimizer for different NLP tasks.
Furthermore, we also do not see any noticeable improvements from using the PubMed database trained word embeddings BIBREF14 instead of the general text trained GloVe word embeddings. In fact, we consistently saw better INLINEFORM0 scores using the GloVe embeddings. This could be due to the fact that our use case was for identifying general labels such as Names, Phones, Locations etc. instead of bio-medical specific terms such as diseases which are far better represented in the PubMed corpus.
Error Analysis
We will mainly focus on the two PHI categories: Profession and ID for our error analysis on the i2b2 data set. It is interesting to note that the best performing models on the i2b2 data set by Dernoncourt et al. BIBREF9 experienced similar lower performances on the same two categories. However, we note the performances by Dernoncourt et al. were achieved using a “combination of n-gram, morphological, orthographic and gazetteer features” BIBREF9 while our architecture uses only POS tagging as an external feature. Dernoncourt et al. posits that the lower performance on the Profession category might be due to the close embeddings of the Profession tokens to other PHI tokens which we can confirm on our architecture as well. Furthermore, our experiments show that the Profession PHI performs considerably better with the PubMed embedded model than GloVe embedded model. This could be due to the fact that PubMed embeddings were trained on the PubMed database, which is a database of medical literature. GloVe on the other hand was trained on a general database, which means the PubMed embeddings for Profession tokens might not be as close to other tokens as is the case for the GloVe embeddings.
For the ID PHI, our analysis shows that some of the errors were due to tokenization errors. For example, a “:” was counted as a PHI token, which our architecture correctly predicted as not a PHI token. Since our architecture is not custom tailored to detect sophisticated ID patterns like the systems in BIBREF9 , BIBREF10 are, we failed to detect some ID PHIs such as “265-01-73”, a medical record number, which our architecture predicted as a phone number due to the format of the number. Such errors could easily be mitigated by the use of simple regular expressions.
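For illustration, a post-processing rule of the kind suggested here could look like the following; the patterns are assumptions for this one example and not the actual ID formats used in the i2b2 corpus.

```python
import re

MRN_RE   = re.compile(r'\b\d{3}-\d{2}-\d{2}\b')     # e.g. 265-01-73
PHONE_RE = re.compile(r'\b\d{3}-\d{3}-\d{4}\b')     # e.g. 617-555-0123

for text in ['MRN 265-01-73 on file', 'call 617-555-0123 to confirm']:
    if MRN_RE.search(text):
        print(text, '-> MEDICALRECORD')
    elif PHONE_RE.search(text):
        print(text, '-> PHONE')
```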
We can see that our architecture outperforms the deidentify system by a considerable margin on most categories as measured by the INLINEFORM0 score. For example, the authors of deidentify note that Date PHIs have considerably low precision values, while our architecture achieves a precision of greater than 0.915 for the Date PHI. However, Burckhardt et al. BIBREF22 achieve an impressive precision of 0.899 and recall of 1.0 for the Phone PHI while our architecture only manages 0.778 and 0.583 respectively. Our analysis of this category shows that this is mainly due to a difference in tokenization: stand-alone numbers are being classified as not a PHI.
We tried to use the model that we trained on the i2b2 data set to predict the categories of the nursing data set. However, due to difference in the text structure, the actual text and the format, we achieved less than random performance on the nursing data set. This brings up an important point about the transferability of such models.
Ablation Analysis
Our ablation analysis shows that each layer of our model adds to the overall performance. Figure FIGREF33 shows the binary INLINEFORM0 scores on the i2b2 data set, with each bar corresponding to one feature toggled off. For example, the “No Char Embd” bar shows the performance of the model with no character embeddings and everything else the same as our best model.
We can see a noticeable change in the performance if we do not include the ELMo embeddings versus no GloVe embeddings. The slight decrease in performance when we use no GloVe embeddings shows us that this is a feature we might choose to exclude if computation time is limited. Furthermore, we can see the impact of having no variational dropout and only using a naïve dropout, it shows that variational dropout is better at regularizing our neural network.
Conclusion
In this study, we show that our deep learning architecture, which incorporates the latest developments in contextual word embeddings and NLP, achieves state-of-the-art performance on two widely available gold standard de-identification data sets while reaching performance similar to the best available system in fewer epochs. Our architecture also significantly improves over the performance of the hybrid system deidentify on the nursing data set.
This architecture could be integrated into a client-ready system such as the deidentify system. However, as mentioned in Section SECREF8 , the use of a dictionary (or gazetteer) might help improve the model even further, especially with regard to the Location and Profession PHI types. Such a hybrid system would be highly beneficial to practitioners who need to de-identify patient data on a daily basis. | Unanswerable
0a050658d09f3c6e21e9ab828dc18e59b147cf7c | 0a050658d09f3c6e21e9ab828dc18e59b147cf7c_0 | Q: Do they use BERT?
Text:
Introduction
Electronic Health Records (EHR) have become ubiquitous in recent years in the United States, owing much to the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. BIBREF0 Their ubiquity has given researchers a treasure trove of new data, especially in the realm of unstructured textual data. However, this new data source comes with usage restrictions in order to preserve the privacy of individual patients as mandated by the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires any researcher using this sensitive data to first strip the medical records of any protected health information (PHI), a process known as de-identification.
HIPAA allows for two methods for de-identifying PHIs: the “Expert Determination” method in which an expert certifies that the information is rendered not individually identifiable, and the “Safe Harbor” method in which 18 identifiers are removed or replaced with random data in order for the data to be considered not individually identifiable. Our research pertains to the second method (a list of the relevant identifiers can be seen in Table TABREF4 ).
The process of de-identification has been largely a manual and labor intensive task due to both the sensitive nature of the data and the limited availability of software to automate the task. This has led to a relatively small number of open health data sets available for public use. Recently, there have been two well-known de-identification challenges organized by Informatics for Integrating Biology and the Bedside (i2b2) to encourage innovation in the field of de-identification.
In this paper, we build on the recent advances in natural language processing, especially with regard to word embeddings, by incorporating the deep contextualized word embeddings developed by Peters et al. BIBREF1 into a deep learning architecture. More precisely, we present a deep learning architecture that differs from current architectures in the literature by using bi-directional long short-term memory networks (Bi-LSTMs) with variational dropout and deep contextualized word embeddings while also using components already present in other systems such as traditional word embeddings, character LSTM embeddings and conditional random fields. We test this architecture on two gold standard data sets, the 2014 i2b2 de-identification Track 1 data set BIBREF2 and the nursing notes corpus BIBREF3 . The architecture achieves state-of-the-art performance on both data sets while also achieving faster convergence without the use of dictionaries (or gazetteers) or other rule-based methods that are typically used in other de-identification systems.
The paper is organized as follows: In Section SECREF4 , we review the latest literature around techniques for de-identification with an emphasis on related work using deep learning techniques. In Section SECREF5 , we detail our deep learning architecture and also describe how we use the deep contextualized word embeddings method to improve our results. Section SECREF6 describes the two data sets we will use to evaluate our method and our evaluation metrics. Section SECREF7 presents the performance of our architecture on the data sets. In Section SECREF8 , we discuss the results and provide an analysis of the errors. Finally, in Section SECREF9 , we summarize our contributions while also discussing possible future research.
Background and Related Work
The task of automatic de-identification has been heavily studied recently, in part due to two main challenges organized by i2b2 in 2006 and in 2014. The task of de-identification can be classified as a named entity recognition (NER) problem which has been extensively studied in machine learning literature. Automated de-identification systems can be roughly broken down into four main categories:
Rule-based Systems
Rule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns. BIBREF2 Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use of any labeled data. Hence, they are considered unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regard to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents and rule-based systems often cannot correctly handle these kinds of inaccuracies in the data. Critically, these systems cannot handle context, which could render a medical text unreadable. For example, a diagnosis of “Lou Gehrig's disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehrig” with randomized names, rendering the text meaningless if enough of these tokens were replaced.
Machine Learning Systems
The drawbacks of such rule-based systems led researchers to adopt a machine learning approach. A comprehensive review of such systems can be found in BIBREF7 , BIBREF8 . In machine learning systems, given a sequence of input vectors INLINEFORM0 , a machine learning algorithm outputs label predictions INLINEFORM1 . Since the task of de-identification is a classification task, traditional classification algorithms such as support vector machines, conditional random fields (CRFs) and decision trees BIBREF9 have been used for building de-identification systems.
These machine learning-based systems have the advantage of being able to recognize complex patterns that are not as readily evident to the naked eye. However, the drawback of such ML-based systems is that since classification is a supervised learning task, most of the common classification algorithms require a large labeled data set for robust models. Furthermore, since most of the algorithms described in the last paragraph maximize the likelihood of a label INLINEFORM0 given an vector of inputs INLINEFORM1 , rare patterns that might not occur in the training data set would be misclassified as not being a PHI label. Furthermore, these models might not be generalizable to other text corpora that contain significantly different patterns such as sentence structures and uses of different abbreviated words found commonly in medical notes than the training data set.
Hybrid Systems
With both the advantages and disadvantages of stand alone rule-based and ML-based systems well-documented, systems such as the ones detailed in BIBREF2 combined both ML and rule-based systems to achieve impressive results. Systems such as the ones presented for 2014 i2b2 challenge by Yang et al. BIBREF10 and Liu et al. BIBREF11 used dictionary look-ups, regular expressions and CRFs to achieve accuracies of well over 90% in identifying PHIs.
It is important to note that such hybrid systems rely heavily on feature engineering, a process that manufactures new features from the data that are not present in the raw text. Most machine learning techniques, for example, cannot take text as an input. They require the text to be represented as a vector of numbers. An example of such features can be seen in the system that won the 2014 i2b2 de-identification challenge by Yang et al. BIBREF10 . Their system uses token features such as part-of-speech tagging and chunking, contextual features such as word lemma and POS tags of neighboring words, orthographic features such as capitalization and punctuation marks and task-specific features such as building a list that included all the full names, acronyms of US states and collecting TF-IDF-statistics. Although such hybrid systems achieve impressive results, the task of feature engineering is a time-intensive task that might not be generalizable to other text corpora.
Deep Learning Systems
With the disadvantages of the past three approaches to building a de-identification system in mind, the current state-of-the-art systems employ deep learning techniques to achieve better results than the best hybrid systems while also not requiring the time-consuming process of feature engineering. Deep learning is a subset of machine learning that uses multiple layers of Artificial Neural Networks (ANNs), which has been very succesful at most Natural Language Processing (NLP) tasks. Recent advances in the field of deep learning and NLP especially in regards to named entity recognition have allowed systems such as the one by Dernoncourt et al. BIBREF9 to achieve better results on the 2014 i2b2 de-identification challenge data set than the winning hybrid system proposed by Yang et al. BIBREF10 . The advances in NLP and deep learning which have allowed for this performance are detailed below.
ANNs cannot take words as inputs and require numeric inputs, therefore, past approaches to using ANNs for NLP have been to employ a bag-of-words (BoW) representation of words where a dictionary is built of all known words and each word in a sentence is assigned a unique vector that is inputted into the ANN. A drawback of such a technique is such that words that have similar meanings are represented completely different. As a solution to this problem, a technique called word embeddings have been used. Word embeddings gained popularity when Mikolov et al. BIBREF12 used ANNs to generate a distributed vector representation of a word based on the usage of the word in a text corpus. This way of representing words allowed for similar words to be represented using vectors of similar values while also allowing for complex operations such as the famous example: INLINEFORM0 , where INLINEFORM1 represents a vector for a particular word.
While pre-trained word embeddings such as the widely used GloVe BIBREF12 embeddings are revolutionary and powerful, such representations only capture one context representation, namely the one of the training corpus they were derived from. This shortcoming has led to the very recent development of context-dependent representations such as the ones developed by BIBREF1 , BIBREF13 , which can capture different features of a word.
The Embeddings from Language Models (ELMo) from the system by Peters et al. BIBREF1 are used by the architecture in this paper to achieve state-of-the-art results. The ELMo representations, learned by combining Bi-LSTMs with a language modeling objective, captures context-depended aspects at the higher-level LSTM while the lower-level LSTM captures aspects of syntax. Moreover, the outputs of the different layers of the system can be used independently or averaged to output embeddings that significantly improve some existing models for solving NLP problems. These results drive our motivation to include the ELMo representations in our architecture.
The use of ANNs for many machine learning tasks has gained popularity in recent years. Recently, a variant of recurrent neural networks (RNN) called Bi-directional Long Short-Term Memory (Bi-LSTM) networks has been successfully employed especially in the realm of NER.
In fact, several Bi-LSTM architectures have been proposed to tackle the problem of NER: LSTM-CRF, LSTM-CNNs-CRF and LSTM-CNNs BIBREF9 . The current best performing system on the i2b2 dataset is in fact a system based on LSTM-CRF BIBREF9 .
Method
Our architecture incorporates most of the recent advances in NLP and NER while also differing from other architectures described in the previous section by use of deep contextualized word embeddings, Bi-LSTMs with a variational dropout and the use of the Adam optimizer. Our architecture can be broken down into four distinct layers: pre-processing, embeddings, Bi-LSTM and CRF classifier. A graphical illustration of the architecture can be seen in Figure FIGREF16 while a summary of the parameters for our architecture can be found in Table TABREF17 .
Pre-processing Layer
For a given document INLINEFORM0 , we first break down the document into sentences INLINEFORM1 , tokens INLINEFORM2 and characters INLINEFORM3 where INLINEFORM4 represents the document number, INLINEFORM5 represents the sentence number, INLINEFORM6 represents the token number, and INLINEFORM7 represents the character number. For example, INLINEFORM8 Patient, where the token: “Patient” represents the 3rd token of the 2nd sentence of the 1st document.
After parsing the tokens, we use a widely used and readily available Python toolkit called the Natural Language Toolkit (NLTK) to generate a part-of-speech (POS) tag for each token. This generates a POS feature for each token, which we transform into a 20-dimensional one-hot-encoded input vector, INLINEFORM0 , and then feed into the main LSTM layer.
For the data labels, since the data labels can be made up of multiple tokens, we formatted the labels to the BIO scheme. The BIO scheme tags the beginning of a PHI with a B-, the rest of the same PHI tokens as I- and the rest of the tokens not associated with a PHI as O. For example, the sentence, “ INLINEFORM0 ”, would have the corresponding labels, “ INLINEFORM1 ”.
Embedding Layer
For the embedding layer, we use three main types of embeddings to represent our input text: traditional word embeddings, ELMo embeddings and character-level LSTM embeddings.
The traditional word embeddings use the latest GloVe 3 BIBREF12 pre-trained word vectors that were trained on the Common Crawl with about 840 billion tokens. For every token input, INLINEFORM0 , the GloVe system outputs INLINEFORM1 , a dense 300-dimensional word vector representation of that same token. We also experimented with other word embeddings by using the bio-medical corpus trained word embeddings BIBREF14 to see if having word embeddings trained on medical texts will have an impact on our results.
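A minimal sketch of loading such pre-trained vectors from their plain-text distribution format into a lookup table is shown below; the file path is a placeholder, and a smaller GloVe file works the same way.

```python
import numpy as np

def load_glove(path, dim=300):
    vectors = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            word = ' '.join(parts[:-dim])              # a few GloVe tokens contain spaces
            vectors[word] = np.asarray(parts[-dim:], dtype=np.float32)
    return vectors

glove = load_glove('glove.840B.300d.txt')              # Common Crawl, 300-dimensional
print(glove['patient'].shape)                          # (300,)
```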
As mentioned in previous sections, we also incorporate the powerful ELMo representations as a feature to our Bi-LSTMs. The specifics of the ELMo representations are detailed in BIBREF1 . In short, we compute an ELMo representation by passing a token input INLINEFORM0 to the ELMo network and averaging the layers of the network to produce a 1024-dimensional ELMo vector, INLINEFORM1 .
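The layer-averaging step can be sketched as follows; obtaining the three layer activations from a pre-trained ELMo model is assumed and replaced here with a random placeholder of the right shape.

```python
import numpy as np

n_tokens = 6
elmo_layers = np.random.randn(3, n_tokens, 1024)   # placeholder for the three ELMo layer outputs

elmo_vectors = elmo_layers.mean(axis=0)            # one 1024-dimensional vector per token
print(elmo_vectors.shape)                          # (6, 1024)
```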
Character-level information can capture some information about the token itself while also mitigating issues such as unseen words and misspellings. While lemmatizing (i.e., the act of turning inflected forms of a word to their base or dictionary form) of a token can solve these issues, tokens such as the ones found in medical texts could have important distinctions between, for example, the grammar form of the token. As such, Ma et al. BIBREF15 have used Convolutional Neural Networks (CNN) while Lample et al. BIBREF16 have used Bi-LSTMs to produce character-enhanced representations of each unique token. We have utilized the latter approach of using Bi-LSTMs for produce a character-enhanced embedding for each unique word in our data set. Our parameters for the forward and backward LSTMs are 25 each and the maximum character length is 25, which results in an 50-dimensional embedding vector, INLINEFORM0 , for each token.
After creating the three embeddings for each token, INLINEFORM0 , we concatenate the GloVe and ELMo representations to produce a single 1324-dimensional word input vector, INLINEFORM1 . The concatenated word vector is then further concatenated with the character embedding vector, INLINEFORM2 , POS one-hot-encoded vector, INLINEFORM3 , and the casing embedded vector, INLINEFORM4 , to produce a single 1394-dimensional input vector, INLINEFORM5 , that we feed into our Bi-LSTM layer.
Bi-LSTM Layer
The Bi-LSTM layer is composed of two LSTM layers, which are a variant of the Bidirectional RNNs. In short, the Bi-LSTM layer contains two independent LSTMs in which one network is fed input in the normal time direction while the other network is fed input in the reverse time direction. The outputs of the two networks can then be combined using either summation, multiplication, concatenation or averaging. Our architecture uses simple concatenation to combine the outputs of the two networks.
Our architecture for the Bi-LSTM layer is similar to the ones used by BIBREF16 , BIBREF17 , BIBREF18 with each LSTM containing 100 hidden units. To ensure that the neural networks do not overfit, we use a variant of the popular dropout technique called variational dropout BIBREF19 to regularize our neural networks. Variational dropout differs from the traditional naïve dropout technique by having the same dropout mask for the inputs, outputs and the recurrent layers BIBREF19 . This is in contrast to the traditional technique of applying a different dropout mask for each of the input and output layers. BIBREF20 shows that variational dropout applied to the output and recurrent units performs significantly better than naïve dropout or no dropout for the NER tasks. As such, we apply a dropout probability of 0.5 for both the output and the recurrent units in our architecture.
CRF layer
As a final step, the outputs of the Bi-LSTM layer are inputted into a linear-chain CRF classifier, which maximizes the label probabilities of the entire input sentence. This approach is identical to the Bi-LSTM-CRF model by Huang et al. BIBREF21 CRFs have been incorporated in numerous state-of-the-art models BIBREF16 , BIBREF18 , BIBREF3 because of their ability to incorporate tag information at the sentence level.
While the Bi-LSTM layer takes information from the context into account when generating its label predictions, each decision is independent from the other labels in the sentence. The CRF allows us to find the labeling sequence in a sentence with the highest probability. This way, both previous and subsequent label information is used in determining the label of a given token. As a sequence model, the CRF posits a probability model for the label sequence of the tokens in a sentence, conditional on the word sequence and the output scores from the Bi-LTSM model for the given sentence. In doing so, the CRF models the conditional distribution of the label sequence instead of a joint distribution with the words and output scores. Thus, it does not assume independent features, while at the same time not making strong distributional assumptions about the relationship between the features and sequence labels.
Data and Evaluation Metrics
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
The i2b2 corpus was used by all tracks of the 2014 i2b2 challenge. It consists of 1,304 patient progress notes for 296 diabetic patients. All the PHIs were removed and replaced with random replacements. The PHIs in this data set were broken down first into the HIPAA categories and then into the i2b2-PHI categories as shown in Table TABREF23 . Overall, the data set contains 56,348 sentences with 984,723 separate tokens of which 41,355 are separate PHI tokens, which represent 28,867 separate PHI instances. For our test-train-valid split, we chose 10% of the training sentences to serve as our validation set, which represents 3,381 sentences while a separately held-out official test data set was specified by the competition. This test data set contains 22,541 sentences including 15,275 separate PHI tokens.
The nursing notes were originally collected by Neamatullah et al. BIBREF3 . The data set contains 2,434 notes of which there are 1,724 separate PHI instances. A summary of the breakdown of the PHI categories of this nursing corpora can be seen in Table TABREF23 .
Evaluation Metrics
For de-identification tasks, the three metrics we will use to evaluate the performance of our architecture are Precision, Recall and INLINEFORM0 score as defined below. We will compute both the binary INLINEFORM1 score and the three metrics for each PHI type for both data sets. Note that binary INLINEFORM2 score calculates whether or not a token was identified as a PHI as opposed to correctly predicting the right PHI type. For de-identification, we place more importance on identifying if a token was a PHI instance with correctly predicting the right PHI type as a secondary objective. INLINEFORM3 INLINEFORM4
Notice that a high recall is paramount given the risk of accidentally disclosing sensitive patient information if not all PHI are detected and removed from the document or replaced by fake data. A high precision is also desired to preserve the integrity of the documents, as a large number of false positives might obscure the meaning of the text or even distort it. As the harmonic mean of precision and recall, the INLINEFORM0 score gives an overall measure for model performance that is frequently employed in the NLP literature.
As a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.
Results
We evaluated the architecture on both the i2b2-PHI categories and the HIPAA-PHI categories for the i2b2 data set based on token-level labels. Note that the HIPAA categories are a super set of the i2b2-PHI categories. We also ran the analysis 5+ times to give us a range of maximum scores for the different data sets.
Table TABREF25 gives us a summary of how our architecture performed against other systems on the binary INLINEFORM0 score metrics while Table TABREF26 and Table TABREF27 summarizes the performance of our architecture against other systems on HIPAA-PHI categories and i2b2-PHI categories respectively. Table TABREF28 presents a summary of the performance on the nursing note corpus while also contrasting the performances achieved by the deidentify system.
Discussion and Error Analysis
As we can see in Table TABREF26 , with the exception of ID, our architecture performs considerably better than systems by Liu et al. and Yang et al. Dernoncourt et al. did not provide exact figures for the HIPAA-PHI categories so we have excluded them from our analysis. Furthermore, Table TABREF25 shows that our architecture performs similarly to the best scores achieved by Dernoncourt et al., with our architecture slightly edging out Dernoncourt et al. on the precision metric. For the nursing corpus, our system, while not performing as well as the performances on i2b2 data set, managed to best the scores achieved by the deidentify system while also achieving a binary INLINEFORM0 score of over 0.812. It is important to note that deidentify was a unsupervised learning system, it did not require the use of a train-valid-test split and therefore, used the whole data set for their performance numbers. The results of our architecture is assessed using a 60%/40% train/test split.
Our architecture noticeably converges faster than the NeuroNER, which was trained for 100 epochs and the system by Liu et al. BIBREF18 which was trained for 80 epochs. Different runs of training our architecture on the i2b2 dataset converge at around 23 INLINEFORM0 4 epochs. A possible explanation for this is due to our architecture using the Adam optimizer, whereas the NeuroNER system use the Stochastic Gradient Descent (SGD) optimizer. In fact, Reimers et al. BIBREF20 show that the SGD optimizer performed considerably worse than the Adam optimizer for different NLP tasks.
Furthermore, we also do not see any noticeable improvements from using the PubMed database trained word embeddings BIBREF14 instead of the general text trained GloVe word embeddings. In fact, we consistently saw better INLINEFORM0 scores using the GloVe embeddings. This could be due to the fact that our use case was for identifying general labels such as Names, Phones, Locations etc. instead of bio-medical specific terms such as diseases which are far better represented in the PubMed corpus.
Error Analysis
We will mainly focus on the two PHI categories: Profession and ID for our error analysis on the i2b2 data set. It is interesting to note that the best performing models on the i2b2 data set by Dernoncourt et al. BIBREF9 experienced similar lower performances on the same two categories. However, we note the performances by Dernoncourt et al. were achieved using a “combination of n-gram, morphological, orthographic and gazetteer features” BIBREF9 while our architecture uses only POS tagging as an external feature. Dernoncourt et al. posits that the lower performance on the Profession category might be due to the close embeddings of the Profession tokens to other PHI tokens which we can confirm on our architecture as well. Furthermore, our experiments show that the Profession PHI performs considerably better with the PubMed embedded model than GloVe embedded model. This could be due to the fact that PubMed embeddings were trained on the PubMed database, which is a database of medical literature. GloVe on the other hand was trained on a general database, which means the PubMed embeddings for Profession tokens might not be as close to other tokens as is the case for the GloVe embeddings.
For the ID PHI, our analysis shows that some of the errors were due to tokenization errors. For example, a “:” was counted as PHI token which our architecture correctly predicted as not a PHI token. Since our architecture is not custom tailored to detect sophisticated ID patterns such as the systems in BIBREF9 , BIBREF10 , we have failed to detect some ID PHIs such as “265-01-73”, a medical record number, which our architecture predicted as a phone number due to the format of the number. Such errors could easily be mitigated by the use of simple regular expressions.
We can see that our architecture outperforms the deidentify system by a considerable margin on most categories as measured by the INLINEFORM0 score. For example, the authors of deidentify note that Date PHIs have considerably low precision values while our architecture achieve a precision value of greater than 0.915% for the Date PHI. However, Burckhardt et al. BIBREF22 achieve an impressive precision of 0.899 and recall of 1.0 for the Phone PHI while our architecture only manages 0.778 and 0.583 respectively. Our analysis of this category shows that this is mainly due a difference in tokenization, stand alone number are being classified as not a PHI.
We tried to use the model that we trained on the i2b2 data set to predict the categories of the nursing data set. However, due to difference in the text structure, the actual text and the format, we achieved less than random performance on the nursing data set. This brings up an important point about the transferability of such models.
Ablation Analysis
Our ablation analysis shows us that the layers of our models adds to the overall performance. Figure FIGREF33 shows the binary INLINEFORM0 scores on the i2b2 data set with each bar being a feature toggled off. For example, the “No Char Embd” bar shows the performance of the model with no character embeddings and everything else the same as our best model.
We can see a noticeable change in the performance if we do not include the ELMo embeddings versus no GloVe embeddings. The slight decrease in performance when we use no GloVe embeddings shows us that this is a feature we might choose to exclude if computation time is limited. Furthermore, we can see the impact of having no variational dropout and only using a naïve dropout, it shows that variational dropout is better at regularizing our neural network.
Conclusion
In this study, we show that our deep learning architecture, which incorporates the latest developments in contextual word embeddings and NLP, achieves state-of-the-art performance on two widely available gold standard de-identification data sets while also achieving similar performance as the best system available in less epochs. Our architecture also significantly improves over the performance of the hybrid system deidentify on the nursing data set.
This architecture could be integrated into a client-ready system such as the deidentify system. However, as mentioned in Section SECREF8 , the use of a dictionary (or gazetteer) might help improve the model even further, especially with regard to the Location and Profession PHI types. Such a hybrid system would be highly beneficial to practitioners who need to de-identify patient data on a daily basis. | No
fd80a7162fde83077ed82ae41d521d774f74340a | fd80a7162fde83077ed82ae41d521d774f74340a_0 | Q: What is their baseline?
Text:
Introduction
Electronic Health Records (EHR) have become ubiquitous in recent years in the United States, owing much to the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. BIBREF0 Their ubiquity has given researchers a treasure trove of new data, especially in the realm of unstructured textual data. However, this new data source comes with usage restrictions in order to preserve the privacy of individual patients as mandated by the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires any researcher using this sensitive data to first strip the medical records of any protected health information (PHI), a process known as de-identification.
HIPAA allows for two methods for de-identifying PHIs: the “Expert Determination” method in which an expert certifies that the information is rendered not individually identifiable, and the “Safe Harbor” method in which 18 identifiers are removed or replaced with random data in order for the data to be considered not individually identifiable. Our research pertains to the second method (a list of the relevant identifiers can be seen in Table TABREF4 ).
The process of de-identification has been largely a manual and labor intensive task due to both the sensitive nature of the data and the limited availability of software to automate the task. This has led to a relatively small number of open health data sets available for public use. Recently, there have been two well-known de-identification challenges organized by Informatics for Integrating Biology and the Bedside (i2b2) to encourage innovation in the field of de-identification.
In this paper, we build on the recent advances in natural language processing, especially with regard to word embeddings, by incorporating the deep contextualized word embeddings developed by Peters et al. BIBREF1 into a deep learning architecture. More precisely, we present a deep learning architecture that differs from current architectures in the literature by using bi-directional long short-term memory networks (Bi-LSTMs) with variational dropout and deep contextualized word embeddings while also using components already present in other systems such as traditional word embeddings, character LSTM embeddings and conditional random fields. We test this architecture on two gold standard data sets, the 2014 i2b2 de-identification Track 1 data set BIBREF2 and the nursing notes corpus BIBREF3 . The architecture achieves state-of-the-art performance on both data sets while also achieving faster convergence without the use of dictionaries (or gazetteers) or other rule-based methods that are typically used in other de-identification systems.
The paper is organized as follows: In Section SECREF4 , we review the latest literature around techniques for de-identification with an emphasis on related work using deep learning techniques. In Section SECREF5 , we detail our deep learning architecture and also describe how we use the deep contextualized word embeddings method to improve our results. Section SECREF6 describes the two data sets we will use to evaluate our method and our evaluation metrics. Section SECREF7 presents the performance of our architecture on the data sets. In Section SECREF8 , we discuss the results and provide an analysis of the errors. Finally, in Section SECREF9 , we summarize our contributions while also discussing possible future research.
Background and Related Work
The task of automatic de-identification has been heavily studied recently, in part due to two main challenges organized by i2b2 in 2006 and in 2014. The task of de-identification can be classified as a named entity recognition (NER) problem which has been extensively studied in machine learning literature. Automated de-identification systems can be roughly broken down into four main categories:
Rule-based Systems
Rule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns. BIBREF2 Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use any labeled data. Hence, they are considered as unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regards to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents and rule-based systems often cannot correctly handle these types of inaccuracies present in the data. Critically, these systems cannot handle context which could render a medical text unreadable. For example, a diagnosis of “Lou Gehring disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehring” with randomized names rendering the text meaningless if enough of these tokens were replaced.
Machine Learning Systems
The drawbacks of such rule-based systems led researchers to adopt a machine learning approach. A comprehensive review of such systems can be found in BIBREF7 , BIBREF8 . In machine learning systems, given a sequence of input vectors INLINEFORM0 , a machine learning algorithm outputs label predictions INLINEFORM1 . Since the task of de-identification is a classification task, traditional classification algorithms such as support vector machines, conditional random fields (CRFs) and decision trees BIBREF9 have been used for building de-identification systems.
These machine learning-based systems have the advantage of being able to recognize complex patterns that are not as readily evident to the naked eye. However, the drawback of such ML-based systems is that since classification is a supervised learning task, most of the common classification algorithms require a large labeled data set for robust models. Furthermore, since most of the algorithms described in the last paragraph maximize the likelihood of a label INLINEFORM0 given an vector of inputs INLINEFORM1 , rare patterns that might not occur in the training data set would be misclassified as not being a PHI label. Furthermore, these models might not be generalizable to other text corpora that contain significantly different patterns such as sentence structures and uses of different abbreviated words found commonly in medical notes than the training data set.
Hybrid Systems
With both the advantages and disadvantages of stand alone rule-based and ML-based systems well-documented, systems such as the ones detailed in BIBREF2 combined both ML and rule-based systems to achieve impressive results. Systems such as the ones presented for 2014 i2b2 challenge by Yang et al. BIBREF10 and Liu et al. BIBREF11 used dictionary look-ups, regular expressions and CRFs to achieve accuracies of well over 90% in identifying PHIs.
It is important to note that such hybrid systems rely heavily on feature engineering, a process that manufactures new features from the data that are not present in the raw text. Most machine learning techniques, for example, cannot take text as an input. They require the text to be represented as a vector of numbers. An example of such features can be seen in the system that won the 2014 i2b2 de-identification challenge by Yang et al. BIBREF10 . Their system uses token features such as part-of-speech tagging and chunking, contextual features such as word lemma and POS tags of neighboring words, orthographic features such as capitalization and punctuation marks and task-specific features such as building a list that included all the full names, acronyms of US states and collecting TF-IDF-statistics. Although such hybrid systems achieve impressive results, the task of feature engineering is a time-intensive task that might not be generalizable to other text corpora.
Deep Learning Systems
With the disadvantages of the past three approaches to building a de-identification system in mind, the current state-of-the-art systems employ deep learning techniques to achieve better results than the best hybrid systems while also not requiring the time-consuming process of feature engineering. Deep learning is a subset of machine learning that uses multiple layers of Artificial Neural Networks (ANNs) and has been very successful at most Natural Language Processing (NLP) tasks. Recent advances in the field of deep learning and NLP, especially with regard to named entity recognition, have allowed systems such as the one by Dernoncourt et al. BIBREF9 to achieve better results on the 2014 i2b2 de-identification challenge data set than the winning hybrid system proposed by Yang et al. BIBREF10 . The advances in NLP and deep learning which have allowed for this performance are detailed below.
ANNs cannot take words as inputs and require numeric inputs; therefore, past approaches to using ANNs for NLP have been to employ a bag-of-words (BoW) representation of words, where a dictionary is built of all known words and each word in a sentence is assigned a unique vector that is inputted into the ANN. A drawback of such a technique is that words that have similar meanings are represented completely differently. As a solution to this problem, a technique called word embeddings has been used. Word embeddings gained popularity when Mikolov et al. BIBREF12 used ANNs to generate a distributed vector representation of a word based on the usage of the word in a text corpus. This way of representing words allows similar words to be represented using vectors of similar values while also allowing for complex operations such as the famous example $v_{king} - v_{man} + v_{woman} \approx v_{queen}$, where $v_{word}$ represents the vector for a particular word.
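This kind of analogy can be reproduced in a few lines; the sketch below assumes a locally available pre-trained embedding file in word2vec text format (the file name is a placeholder) and uses the gensim library.

```python
from gensim.models import KeyedVectors

# Load pre-trained vectors; 'embeddings.txt' is a placeholder for a local file
# in word2vec text format (e.g. converted GloVe vectors).
vectors = KeyedVectors.load_word2vec_format('embeddings.txt', binary=False)

# king - man + woman should land near queen for reasonable embeddings.
print(vectors.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))
```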
While pre-trained word embeddings such as the widely used GloVe BIBREF12 embeddings are revolutionary and powerful, such representations only capture one context representation, namely the one of the training corpus they were derived from. This shortcoming has led to the very recent development of context-dependent representations such as the ones developed by BIBREF1 , BIBREF13 , which can capture different features of a word.
The Embeddings from Language Models (ELMo) from the system by Peters et al. BIBREF1 are used by the architecture in this paper to achieve state-of-the-art results. The ELMo representations, learned by combining Bi-LSTMs with a language modeling objective, capture context-dependent aspects at the higher-level LSTM while the lower-level LSTM captures aspects of syntax. Moreover, the outputs of the different layers of the system can be used independently or averaged to output embeddings that significantly improve some existing models for solving NLP problems. These results drive our motivation to include the ELMo representations in our architecture.
The use of ANNs for many machine learning tasks has gained popularity in recent years. Recently, a variant of recurrent neural networks (RNN) called Bi-directional Long Short-Term Memory (Bi-LSTM) networks has been successfully employed especially in the realm of NER.
In fact, several Bi-LSTM architectures have been proposed to tackle the problem of NER: LSTM-CRF, LSTM-CNNs-CRF and LSTM-CNNs BIBREF9 . The current best performing system on the i2b2 dataset is in fact a system based on LSTM-CRF BIBREF9 .
Method
Our architecture incorporates most of the recent advances in NLP and NER while also differing from other architectures described in the previous section by use of deep contextualized word embeddings, Bi-LSTMs with a variational dropout and the use of the Adam optimizer. Our architecture can be broken down into four distinct layers: pre-processing, embeddings, Bi-LSTM and CRF classifier. A graphical illustration of the architecture can be seen in Figure FIGREF16 while a summary of the parameters for our architecture can be found in Table TABREF17 .
Pre-processing Layer
For a given document $d_i$, we first break the document down into sentences $s_{i,j}$, tokens $t_{i,j,k}$ and characters $c_{i,j,k,l}$, where $i$ represents the document number, $j$ represents the sentence number, $k$ represents the token number, and $l$ represents the character number. For example, $t_{1,2,3} = $ “Patient” means that the token “Patient” is the 3rd token of the 2nd sentence of the 1st document.
After parsing the tokens, we use a widely used and readily available Python toolkit called the Natural Language Toolkit (NLTK) to generate a part-of-speech (POS) tag for each token. This generates a POS feature for each token, which we transform into a 20-dimensional one-hot-encoded input vector and then feed into the main LSTM layer.
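As an illustration of this pre-processing step, the sketch below tokenizes a sentence with NLTK and turns each POS tag into a one-hot vector; the 20-tag grouping is an assumption for illustration, not the paper's exact mapping.

```python
import numpy as np
import nltk  # may require nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

# Illustrative 20-way POS grouping; the paper does not specify its exact mapping.
POS_TAGS = ['NN', 'NNS', 'NNP', 'NNPS', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ',
            'JJ', 'JJR', 'JJS', 'RB', 'CD', 'IN', 'DT', 'PRP', 'CC', 'OTHER']

def pos_features(sentence):
    """Tokenize a sentence, POS-tag it and return (tokens, (n_tokens, 20) one-hot matrix)."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    feats = np.zeros((len(tokens), len(POS_TAGS)), dtype=np.float32)
    for i, (_, tag) in enumerate(tagged):
        feats[i, POS_TAGS.index(tag) if tag in POS_TAGS else POS_TAGS.index('OTHER')] = 1.0
    return tokens, feats

tokens, feats = pos_features("Patient was admitted on Friday.")
print(feats.shape)  # (6, 20)
```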
For the data labels, since a single PHI can be made up of multiple tokens, we formatted the labels to the BIO scheme. The BIO scheme tags the beginning of a PHI with B-, the remaining tokens of the same PHI with I-, and all tokens not associated with a PHI as O. For example, a PHI spanning two tokens would have its first token labeled B- and its second token labeled I- (with the PHI type appended), while all other tokens in the sentence would be labeled O, as in the sketch below.
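A minimal sketch of the BIO conversion; the sentence, PHI spans and PHI type names below are made up for illustration.

```python
def to_bio(tokens, phi_spans):
    """phi_spans: list of (start, end_exclusive, phi_type) token-index spans."""
    labels = ['O'] * len(tokens)
    for start, end, phi_type in phi_spans:
        labels[start] = 'B-' + phi_type
        for i in range(start + 1, end):
            labels[i] = 'I-' + phi_type
    return labels

tokens = ['Seen', 'by', 'Dr.', 'John', 'Smith', 'at', 'MGH']
print(to_bio(tokens, [(3, 5, 'NAME'), (6, 7, 'LOCATION')]))
# ['O', 'O', 'O', 'B-NAME', 'I-NAME', 'O', 'B-LOCATION']
```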
Embedding Layer
For the embedding layer, we use three main types of embeddings to represent our input text: traditional word embeddings, ELMo embeddings and character-level LSTM embeddings.
The traditional word embeddings use the latest GloVe BIBREF12 pre-trained word vectors that were trained on the Common Crawl with about 840 billion tokens. For every input token, the GloVe system outputs a dense 300-dimensional word vector representation of that token. We also experimented with other word embeddings by using the bio-medical corpus trained word embeddings BIBREF14 to see if word embeddings trained on medical texts have an impact on our results.
As mentioned in previous sections, we also incorporate the powerful ELMo representations as a feature to our Bi-LSTMs. The specifics of the ELMo representations are detailed in BIBREF1 . In short, we compute an ELMo representation by passing a token input to the ELMo network and averaging the layers of the network to produce a 1024-dimensional ELMo vector.
Character-level information can capture some information about the token itself while also mitigating issues such as unseen words and misspellings. While lemmatizing a token (i.e., turning inflected forms of a word into their base or dictionary form) can solve these issues, tokens such as the ones found in medical texts can carry important distinctions in, for example, the grammatical form of the token. As such, Ma et al. BIBREF15 have used Convolutional Neural Networks (CNNs) while Lample et al. BIBREF16 have used Bi-LSTMs to produce character-enhanced representations of each unique token. We have utilized the latter approach of using Bi-LSTMs to produce a character-enhanced embedding for each unique word in our data set. Our parameters for the forward and backward LSTMs are 25 each and the maximum character length is 25, which results in a 50-dimensional embedding vector for each token.
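A sketch of the character-level sub-network in Keras; the character vocabulary size is an illustrative assumption, while the unit count and maximum character length follow the text.

```python
from tensorflow.keras import layers

max_char_len = 25   # maximum characters per token, as in the text
char_vocab = 100    # assumed character vocabulary size

char_input = layers.Input(shape=(max_char_len,), dtype='int32')
char_emb = layers.Embedding(input_dim=char_vocab, output_dim=25)(char_input)
# Forward and backward LSTMs with 25 units each, concatenated -> 50-dim token vector.
char_vec = layers.Bidirectional(layers.LSTM(25), merge_mode='concat')(char_emb)
```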
After creating the three embeddings for each token, we concatenate the GloVe and ELMo representations to produce a single 1324-dimensional word input vector. The concatenated word vector is then further concatenated with the character embedding vector, the POS one-hot-encoded vector and the casing embedding vector to produce a single 1394-dimensional input vector that we feed into our Bi-LSTM layer.
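The concatenation itself is straightforward; the sketch below uses random placeholders for the individual feature vectors, and only the dimensionalities follow the text (the casing feature mentioned above is omitted because its size is not specified).

```python
import numpy as np

glove = np.random.rand(300)      # placeholder GloVe vector
elmo = np.random.rand(1024)      # placeholder averaged ELMo vector
char = np.random.rand(50)        # placeholder character-level Bi-LSTM embedding
pos = np.zeros(20); pos[0] = 1   # placeholder one-hot POS feature

word_vec = np.concatenate([glove, elmo])           # 1324-dimensional word vector
token_vec = np.concatenate([word_vec, char, pos])  # 1394-dimensional input vector
print(token_vec.shape)  # (1394,)
```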
Bi-LSTM Layer
The Bi-LSTM layer is composed of two LSTM layers, which are a variant of the Bidirectional RNNs. In short, the Bi-LSTM layer contains two independent LSTMs in which one network is fed input in the normal time direction while the other network is fed input in the reverse time direction. The outputs of the two networks can then be combined using either summation, multiplication, concatenation or averaging. Our architecture uses simple concatenation to combine the outputs of the two networks.
Our architecture for the Bi-LSTM layer is similar to the ones used by BIBREF16 , BIBREF17 , BIBREF18 with each LSTM containing 100 hidden units. To ensure that the neural networks do not overfit, we use a variant of the popular dropout technique called variational dropout BIBREF19 to regularize our neural networks. Variational dropout differs from the traditional naïve dropout technique by having the same dropout mask for the inputs, outputs and the recurrent layers BIBREF19 . This is in contrast to the traditional technique of applying a different dropout mask for each of the input and output layers. BIBREF20 shows that variational dropout applied to the output and recurrent units performs significantly better than naïve dropout or no dropout for the NER tasks. As such, we apply a dropout probability of 0.5 for both the output and the recurrent units in our architecture.
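A sketch of this layer in Keras, not the authors' exact implementation: here `recurrent_dropout` reuses the same mask at every timestep, in the spirit of the variational dropout described above, while an additional Dropout layer could be applied to the outputs.

```python
from tensorflow.keras.layers import Bidirectional, LSTM

bilstm = Bidirectional(
    LSTM(units=100,              # 100 hidden units per direction
         return_sequences=True,  # keep per-token outputs for the CRF layer
         dropout=0.5,            # dropout on the linear transformation of the inputs
         recurrent_dropout=0.5), # dropout on the recurrent connections
    merge_mode='concat')         # concatenate forward and backward outputs
```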
CRF layer
As a final step, the outputs of the Bi-LSTM layer are inputted into a linear-chain CRF classifier, which maximizes the label probabilities of the entire input sentence. This approach is identical to the Bi-LSTM-CRF model by Huang et al. BIBREF21 . CRFs have been incorporated in numerous state-of-the-art models BIBREF16 , BIBREF18 , BIBREF3 because of their ability to incorporate tag information at the sentence level.
While the Bi-LSTM layer takes information from the context into account when generating its label predictions, each decision is independent of the other labels in the sentence. The CRF allows us to find the labeling sequence in a sentence with the highest probability. This way, both previous and subsequent label information is used in determining the label of a given token. As a sequence model, the CRF posits a probability model for the label sequence of the tokens in a sentence, conditional on the word sequence and the output scores from the Bi-LSTM model for the given sentence. In doing so, the CRF models the conditional distribution of the label sequence instead of a joint distribution with the words and output scores. Thus, it does not assume independent features, while at the same time not making strong distributional assumptions about the relationship between the features and sequence labels.
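At prediction time the CRF boils down to Viterbi decoding over the Bi-LSTM's per-token scores plus a learned label-transition matrix; a minimal NumPy sketch of that decoding step is shown below (the transition scores would come from the trained CRF).

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (n_tokens, n_labels) scores from the Bi-LSTM.
    transitions: (n_labels, n_labels) scores for moving from label i to label j.
    Returns the highest-scoring label index sequence."""
    n_tokens, _ = emissions.shape
    score = emissions[0].copy()
    backpointers = []
    for t in range(1, n_tokens):
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for bp in reversed(backpointers):
        best.append(int(bp[best[-1]]))
    return best[::-1]
```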
Data and Evaluation Metrics
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
The i2b2 corpus was used by all tracks of the 2014 i2b2 challenge. It consists of 1,304 patient progress notes for 296 diabetic patients. All the PHIs were removed and replaced with random replacements. The PHIs in this data set were broken down first into the HIPAA categories and then into the i2b2-PHI categories as shown in Table TABREF23 . Overall, the data set contains 56,348 sentences with 984,723 separate tokens of which 41,355 are separate PHI tokens, which represent 28,867 separate PHI instances. For our test-train-valid split, we chose 10% of the training sentences to serve as our validation set, which represents 3,381 sentences while a separately held-out official test data set was specified by the competition. This test data set contains 22,541 sentences including 15,275 separate PHI tokens.
The nursing notes were originally collected by Neamatullah et al. BIBREF3 . The data set contains 2,434 notes of which there are 1,724 separate PHI instances. A summary of the breakdown of the PHI categories of this nursing corpora can be seen in Table TABREF23 .
Evaluation Metrics
For de-identification tasks, the three metrics we will use to evaluate the performance of our architecture are Precision, Recall and F1 score, as defined below. We will compute both the binary F1 score and the three metrics for each PHI type for both data sets. Note that the binary F1 score measures whether or not a token was identified as a PHI, as opposed to correctly predicting the right PHI type. For de-identification, we place more importance on identifying whether a token was a PHI instance, with correctly predicting the right PHI type as a secondary objective. $Precision = \frac{TP}{TP + FP}$, $Recall = \frac{TP}{TP + FN}$, $F_1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$, where TP, FP and FN denote true positives, false positives and false negatives respectively.
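Token-level binary scoring can be computed directly from the BIO labels; a small sketch, where any non-'O' label counts as PHI:

```python
def binary_phi_metrics(gold_labels, pred_labels):
    """Token-level binary precision, recall and F1 for PHI detection."""
    gold = [lab != 'O' for lab in gold_labels]
    pred = [lab != 'O' for lab in pred_labels]
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum(p and not g for g, p in zip(gold, pred))
    fn = sum(g and not p for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(binary_phi_metrics(['B-NAME', 'I-NAME', 'O'], ['B-DATE', 'O', 'O']))
# (1.0, 0.5, 0.666...)
```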
Notice that a high recall is paramount given the risk of accidentally disclosing sensitive patient information if not all PHI are detected and removed from the document or replaced by fake data. A high precision is also desired to preserve the integrity of the documents, as a large number of false positives might obscure the meaning of the text or even distort it. As the harmonic mean of precision and recall, the F1 score gives an overall measure of model performance that is frequently employed in the NLP literature.
As a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.
Results
We evaluated the architecture on both the i2b2-PHI categories and the HIPAA-PHI categories for the i2b2 data set based on token-level labels. Note that the HIPAA categories are a superset of the i2b2-PHI categories. We also ran the analysis 5+ times to give us a range of maximum scores for the different data sets.
Table TABREF25 gives a summary of how our architecture performed against other systems on the binary F1 score metric, while Table TABREF26 and Table TABREF27 summarize the performance of our architecture against other systems on HIPAA-PHI categories and i2b2-PHI categories respectively. Table TABREF28 presents a summary of the performance on the nursing note corpus while also contrasting the performance achieved by the deidentify system.
Discussion and Error Analysis
As we can see in Table TABREF26 , with the exception of ID, our architecture performs considerably better than the systems by Liu et al. and Yang et al. Dernoncourt et al. did not provide exact figures for the HIPAA-PHI categories, so we have excluded them from this comparison. Furthermore, Table TABREF25 shows that our architecture performs similarly to the best scores achieved by Dernoncourt et al., with our architecture slightly edging out Dernoncourt et al. on the precision metric. For the nursing corpus, our system, while not performing as well as on the i2b2 data set, managed to best the scores achieved by the deidentify system while also achieving a binary F1 score of over 0.812. It is important to note that, since deidentify was an unsupervised learning system, it did not require the use of a train-valid-test split and therefore used the whole data set for its performance numbers. The results of our architecture are assessed using a 60%/40% train/test split.
Our architecture noticeably converges faster than NeuroNER, which was trained for 100 epochs, and the system by Liu et al. BIBREF18 , which was trained for 80 epochs. Different runs of training our architecture on the i2b2 dataset converge at around 23 $\pm$ 4 epochs. A possible explanation for this is that our architecture uses the Adam optimizer, whereas the NeuroNER system uses the Stochastic Gradient Descent (SGD) optimizer. In fact, Reimers et al. BIBREF20 show that the SGD optimizer performed considerably worse than the Adam optimizer for different NLP tasks.
Furthermore, we also do not see any noticeable improvements from using the PubMed database trained word embeddings BIBREF14 instead of the general text trained GloVe word embeddings. In fact, we consistently saw better F1 scores using the GloVe embeddings. This could be due to the fact that our use case involves identifying general labels such as Names, Phones and Locations instead of bio-medical specific terms such as diseases, which are far better represented in the PubMed corpus.
Error Analysis
We will mainly focus on the two PHI categories: Profession and ID for our error analysis on the i2b2 data set. It is interesting to note that the best performing models on the i2b2 data set by Dernoncourt et al. BIBREF9 experienced similar lower performances on the same two categories. However, we note the performances by Dernoncourt et al. were achieved using a “combination of n-gram, morphological, orthographic and gazetteer features” BIBREF9 while our architecture uses only POS tagging as an external feature. Dernoncourt et al. posits that the lower performance on the Profession category might be due to the close embeddings of the Profession tokens to other PHI tokens which we can confirm on our architecture as well. Furthermore, our experiments show that the Profession PHI performs considerably better with the PubMed embedded model than GloVe embedded model. This could be due to the fact that PubMed embeddings were trained on the PubMed database, which is a database of medical literature. GloVe on the other hand was trained on a general database, which means the PubMed embeddings for Profession tokens might not be as close to other tokens as is the case for the GloVe embeddings.
For the ID PHI, our analysis shows that some of the errors were due to tokenization errors. For example, a “:” was counted as PHI token which our architecture correctly predicted as not a PHI token. Since our architecture is not custom tailored to detect sophisticated ID patterns such as the systems in BIBREF9 , BIBREF10 , we have failed to detect some ID PHIs such as “265-01-73”, a medical record number, which our architecture predicted as a phone number due to the format of the number. Such errors could easily be mitigated by the use of simple regular expressions.
We can see that our architecture outperforms the deidentify system by a considerable margin on most categories as measured by the F1 score. For example, the authors of deidentify note that Date PHIs have considerably low precision values, while our architecture achieves a precision value of greater than 0.915 for the Date PHI. However, Burckhardt et al. BIBREF22 achieve an impressive precision of 0.899 and recall of 1.0 for the Phone PHI while our architecture only manages 0.778 and 0.583 respectively. Our analysis of this category shows that this is mainly due to a difference in tokenization: stand-alone numbers are being classified as not a PHI.
We tried to use the model that we trained on the i2b2 data set to predict the categories of the nursing data set. However, due to difference in the text structure, the actual text and the format, we achieved less than random performance on the nursing data set. This brings up an important point about the transferability of such models.
Ablation Analysis
Our ablation analysis shows that each layer of our model adds to the overall performance. Figure FIGREF33 shows the binary F1 scores on the i2b2 data set with each bar corresponding to one feature toggled off. For example, the “No Char Embd” bar shows the performance of the model with no character embeddings and everything else the same as our best model.
We can see a noticeable change in performance if we exclude the ELMo embeddings, compared to excluding the GloVe embeddings. The slight decrease in performance when we use no GloVe embeddings shows that this is a feature we might choose to exclude if computation time is limited. Furthermore, we can see the impact of having no variational dropout and only using naïve dropout; it shows that variational dropout is better at regularizing our neural network.
Conclusion
In this study, we show that our deep learning architecture, which incorporates the latest developments in contextual word embeddings and NLP, achieves state-of-the-art performance on two widely available gold standard de-identification data sets while also achieving similar performance as the best available system in fewer epochs. Our architecture also significantly improves over the performance of the hybrid system deidentify on the nursing data set.
This architecture could be integrated into a client-ready system such as the deidentify system. However, as mentioned in Section SECREF8 , the use of a dictionary (or gazetteer) might help improve the model even further, especially with regards to the Location and Profession PHI types. Such a hybrid system would be highly beneficial to practitioners who need to de-identify patient data on a daily basis. | Burckhardt et al. BIBREF22, Liu et al. BIBREF18, Dernoncourt et al. BIBREF9, Yang et al. BIBREF10 |
4d4739682d540878a94d8227412e9e1ec1bb3d39 | 4d4739682d540878a94d8227412e9e1ec1bb3d39_0 | Q: Which two datasets is the system tested on?
Text: Introduction
Electronic Health Records (EHR) have become ubiquitous in recent years in the United States, owing much to the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. BIBREF0 Their ubiquity has given researchers a treasure trove of new data, especially in the realm of unstructured textual data. However, this new data source comes with usage restrictions in order to preserve the privacy of individual patients, as mandated by the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires any researcher using this sensitive data to first strip the medical records of any protected health information (PHI), a process known as de-identification.
HIPAA allows for two methods for de-identifying PHIs: the “Expert Determination” method in which an expert certifies that the information is rendered not individually identifiable, and the “Safe Harbor” method in which 18 identifiers are removed or replaced with random data in order for the data to be considered not individually identifiable. Our research pertains to the second method (a list of the relevant identifiers can be seen in Table TABREF4 ).
The process of de-identification has been largely a manual and labor intensive task due to both the sensitive nature of the data and the limited availability of software to automate the task. This has led to a relatively small number of open health data sets available for public use. Recently, there have been two well-known de-identification challenges organized by Informatics for Integrating Biology and the Bedside (i2b2) to encourage innovation in the field of de-identification.
In this paper, we build on recent advances in natural language processing, especially with regard to word embeddings, by incorporating the deep contextualized word embeddings developed by Peters et al. BIBREF1 into a deep learning architecture. More precisely, we present a deep learning architecture that differs from current architectures in the literature by using bi-directional long short-term memory networks (Bi-LSTMs) with variational dropout and deep contextualized word embeddings, while also using components already present in other systems such as traditional word embeddings, character LSTM embeddings and conditional random fields. We test this architecture on two gold standard data sets, the 2014 i2b2 de-identification Track 1 data set BIBREF2 and the nursing notes corpus BIBREF3 . The architecture achieves state-of-the-art performance on both data sets while also achieving faster convergence without the use of dictionaries (or gazetteers) or other rule-based methods that are typically used in other de-identification systems.
The paper is organized as follows: In Section SECREF4 , we review the latest literature around techniques for de-identification with an emphasis on related work using deep learning techniques. In Section SECREF5 , we detail our deep learning architecture and also describe how we use the deep contextualized word embeddings method to improve our results. Section SECREF6 describes the two data sets we will use to evaluate our method and our evaluation metrics. Section SECREF7 presents the performance of our architecture on the data sets. In Section SECREF8 , we discuss the results and provide an analysis of the errors. Finally, in Section SECREF9 , we summarize our contributions while also discussing possible future research.
Background and Related Work
The task of automatic de-identification has been heavily studied recently, in part due to two main challenges organized by i2b2 in 2006 and in 2014. The task of de-identification can be classified as a named entity recognition (NER) problem which has been extensively studied in machine learning literature. Automated de-identification systems can be roughly broken down into four main categories:
Rule-based Systems
Rule-based systems make heavy use of pattern matching such as dictionaries (or gazetteers), regular expressions and other patterns BIBREF2 . Systems such as the ones described in BIBREF5 , BIBREF6 do not require the use of any labeled data. Hence, they are considered unsupervised learning systems. Advantages of such systems include their ease of use, ease of adding new patterns and easy interpretability. However, these methods suffer from a lack of robustness with regard to the input. For example, different casings of the same word could be misinterpreted as an unknown word. Furthermore, typographical errors are almost always present in most documents and rule-based systems often cannot correctly handle these types of inaccuracies present in the data. Critically, these systems cannot handle context, which could render a medical text unreadable. For example, a diagnosis of “Lou Gehrig's disease” could be misidentified by such a system as a PHI of type Name. The system might replace the tokens “Lou” and “Gehrig” with randomized names, rendering the text meaningless if enough of these tokens were replaced.
Machine Learning Systems
The drawbacks of such rule-based systems led researchers to adopt a machine learning approach. A comprehensive review of such systems can be found in BIBREF7 , BIBREF8 . In machine learning systems, given a sequence of input vectors INLINEFORM0 , a machine learning algorithm outputs label predictions INLINEFORM1 . Since the task of de-identification is a classification task, traditional classification algorithms such as support vector machines, conditional random fields (CRFs) and decision trees BIBREF9 have been used for building de-identification systems.
These machine learning-based systems have the advantage of being able to recognize complex patterns that are not as readily evident to the naked eye. However, the drawback of such ML-based systems is that since classification is a supervised learning task, most of the common classification algorithms require a large labeled data set for robust models. Furthermore, since most of the algorithms described in the last paragraph maximize the likelihood of a label INLINEFORM0 given an vector of inputs INLINEFORM1 , rare patterns that might not occur in the training data set would be misclassified as not being a PHI label. Furthermore, these models might not be generalizable to other text corpora that contain significantly different patterns such as sentence structures and uses of different abbreviated words found commonly in medical notes than the training data set.
Hybrid Systems
With both the advantages and disadvantages of stand alone rule-based and ML-based systems well-documented, systems such as the ones detailed in BIBREF2 combined both ML and rule-based systems to achieve impressive results. Systems such as the ones presented for 2014 i2b2 challenge by Yang et al. BIBREF10 and Liu et al. BIBREF11 used dictionary look-ups, regular expressions and CRFs to achieve accuracies of well over 90% in identifying PHIs.
It is important to note that such hybrid systems rely heavily on feature engineering, a process that manufactures new features from the data that are not present in the raw text. Most machine learning techniques, for example, cannot take text as an input. They require the text to be represented as a vector of numbers. An example of such features can be seen in the system that won the 2014 i2b2 de-identification challenge by Yang et al. BIBREF10 . Their system uses token features such as part-of-speech tagging and chunking, contextual features such as word lemma and POS tags of neighboring words, orthographic features such as capitalization and punctuation marks and task-specific features such as building a list that included all the full names, acronyms of US states and collecting TF-IDF-statistics. Although such hybrid systems achieve impressive results, the task of feature engineering is a time-intensive task that might not be generalizable to other text corpora.
Deep Learning Systems
With the disadvantages of the past three approaches to building a de-identification system in mind, the current state-of-the-art systems employ deep learning techniques to achieve better results than the best hybrid systems while also not requiring the time-consuming process of feature engineering. Deep learning is a subset of machine learning that uses multiple layers of Artificial Neural Networks (ANNs), which has been very succesful at most Natural Language Processing (NLP) tasks. Recent advances in the field of deep learning and NLP especially in regards to named entity recognition have allowed systems such as the one by Dernoncourt et al. BIBREF9 to achieve better results on the 2014 i2b2 de-identification challenge data set than the winning hybrid system proposed by Yang et al. BIBREF10 . The advances in NLP and deep learning which have allowed for this performance are detailed below.
ANNs cannot take words as inputs and require numeric inputs, therefore, past approaches to using ANNs for NLP have been to employ a bag-of-words (BoW) representation of words where a dictionary is built of all known words and each word in a sentence is assigned a unique vector that is inputted into the ANN. A drawback of such a technique is such that words that have similar meanings are represented completely different. As a solution to this problem, a technique called word embeddings have been used. Word embeddings gained popularity when Mikolov et al. BIBREF12 used ANNs to generate a distributed vector representation of a word based on the usage of the word in a text corpus. This way of representing words allowed for similar words to be represented using vectors of similar values while also allowing for complex operations such as the famous example: INLINEFORM0 , where INLINEFORM1 represents a vector for a particular word.
While pre-trained word embeddings such as the widely used GloVe BIBREF12 embeddings are revolutionary and powerful, such representations only capture one context representation, namely the one of the training corpus they were derived from. This shortcoming has led to the very recent development of context-dependent representations such as the ones developed by BIBREF1 , BIBREF13 , which can capture different features of a word.
The Embeddings from Language Models (ELMo) from the system by Peters et al. BIBREF1 are used by the architecture in this paper to achieve state-of-the-art results. The ELMo representations, learned by combining Bi-LSTMs with a language modeling objective, captures context-depended aspects at the higher-level LSTM while the lower-level LSTM captures aspects of syntax. Moreover, the outputs of the different layers of the system can be used independently or averaged to output embeddings that significantly improve some existing models for solving NLP problems. These results drive our motivation to include the ELMo representations in our architecture.
The use of ANNs for many machine learning tasks has gained popularity in recent years. Recently, a variant of recurrent neural networks (RNN) called Bi-directional Long Short-Term Memory (Bi-LSTM) networks has been successfully employed especially in the realm of NER.
In fact, several Bi-LSTM architectures have been proposed to tackle the problem of NER: LSTM-CRF, LSTM-CNNs-CRF and LSTM-CNNs BIBREF9 . The current best performing system on the i2b2 dataset is in fact a system based on LSTM-CRF BIBREF9 .
Method
Our architecture incorporates most of the recent advances in NLP and NER while also differing from other architectures described in the previous section by use of deep contextualized word embeddings, Bi-LSTMs with a variational dropout and the use of the Adam optimizer. Our architecture can be broken down into four distinct layers: pre-processing, embeddings, Bi-LSTM and CRF classifier. A graphical illustration of the architecture can be seen in Figure FIGREF16 while a summary of the parameters for our architecture can be found in Table TABREF17 .
Pre-processing Layer
For a given document INLINEFORM0 , we first break down the document into sentences INLINEFORM1 , tokens INLINEFORM2 and characters INLINEFORM3 where INLINEFORM4 represents the document number, INLINEFORM5 represents the sentence number, INLINEFORM6 represents the token number, and INLINEFORM7 represents the character number. For example, INLINEFORM8 Patient, where the token: “Patient” represents the 3rd token of the 2nd sentence of the 1st document.
After parsing the tokens, we use a widely used and readily available Python toolkit called the Natural Language Toolkit (NLTK) to generate a part-of-speech (POS) tag for each token. This generates a POS feature for each token, which we transform into a 20-dimensional one-hot-encoded input vector and then feed into the main LSTM layer.
For the data labels, since the data labels can be made up of multiple tokens, we formatted the labels to the BIO scheme. The BIO scheme tags the beginning of a PHI with a B-, the rest of the same PHI tokens as I- and the rest of the tokens not associated with a PHI as O. For example, the sentence, “ INLINEFORM0 ”, would have the corresponding labels, “ INLINEFORM1 ”.
Embedding Layer
For the embedding layer, we use three main types of embeddings to represent our input text: traditional word embeddings, ELMo embeddings and character-level LSTM embeddings.
The traditional word embeddings use the latest GloVe 3 BIBREF12 pre-trained word vectors that were trained on the Common Crawl with about 840 billion tokens. For every token input, INLINEFORM0 , the GloVe system outputs INLINEFORM1 , a dense 300-dimensional word vector representation of that same token. We also experimented with other word embeddings by using the bio-medical corpus trained word embeddings BIBREF14 to see if having word embeddings trained on medical texts will have an impact on our results.
As mentioned in previous sections, we also incorporate the powerful ELMo representations as a feature to our Bi-LSTMs. The specifics of the ELMo representations are detailed in BIBREF1 . In short, we compute an ELMo representation by passing a token input to the ELMo network and averaging the layers of the network to produce a 1024-dimensional ELMo vector.
Character-level information can capture some information about the token itself while also mitigating issues such as unseen words and misspellings. While lemmatizing (i.e., the act of turning inflected forms of a word to their base or dictionary form) of a token can solve these issues, tokens such as the ones found in medical texts could have important distinctions between, for example, the grammar form of the token. As such, Ma et al. BIBREF15 have used Convolutional Neural Networks (CNN) while Lample et al. BIBREF16 have used Bi-LSTMs to produce character-enhanced representations of each unique token. We have utilized the latter approach of using Bi-LSTMs for produce a character-enhanced embedding for each unique word in our data set. Our parameters for the forward and backward LSTMs are 25 each and the maximum character length is 25, which results in an 50-dimensional embedding vector, INLINEFORM0 , for each token.
After creating the three embeddings for each token, INLINEFORM0 , we concatenate the GloVe and ELMo representations to produce a single 1324-dimensional word input vector, INLINEFORM1 . The concatenated word vector is then further concatenated with the character embedding vector, INLINEFORM2 , POS one-hot-encoded vector, INLINEFORM3 , and the casing embedded vector, INLINEFORM4 , to produce a single 1394-dimensional input vector, INLINEFORM5 , that we feed into our Bi-LSTM layer.
Bi-LSTM Layer
The Bi-LSTM layer is composed of two LSTM layers, which are a variant of the Bidirectional RNNs. In short, the Bi-LSTM layer contains two independent LSTMs in which one network is fed input in the normal time direction while the other network is fed input in the reverse time direction. The outputs of the two networks can then be combined using either summation, multiplication, concatenation or averaging. Our architecture uses simple concatenation to combine the outputs of the two networks.
Our architecture for the Bi-LSTM layer is similar to the ones used by BIBREF16 , BIBREF17 , BIBREF18 with each LSTM containing 100 hidden units. To ensure that the neural networks do not overfit, we use a variant of the popular dropout technique called variational dropout BIBREF19 to regularize our neural networks. Variational dropout differs from the traditional naïve dropout technique by having the same dropout mask for the inputs, outputs and the recurrent layers BIBREF19 . This is in contrast to the traditional technique of applying a different dropout mask for each of the input and output layers. BIBREF20 shows that variational dropout applied to the output and recurrent units performs significantly better than naïve dropout or no dropout for the NER tasks. As such, we apply a dropout probability of 0.5 for both the output and the recurrent units in our architecture.
CRF layer
As a final step, the outputs of the Bi-LSTM layer are inputted into a linear-chain CRF classifier, which maximizes the label probabilities of the entire input sentence. This approach is identical to the Bi-LSTM-CRF model by Huang et al. BIBREF21 CRFs have been incorporated in numerous state-of-the-art models BIBREF16 , BIBREF18 , BIBREF3 because of their ability to incorporate tag information at the sentence level.
While the Bi-LSTM layer takes information from the context into account when generating its label predictions, each decision is independent from the other labels in the sentence. The CRF allows us to find the labeling sequence in a sentence with the highest probability. This way, both previous and subsequent label information is used in determining the label of a given token. As a sequence model, the CRF posits a probability model for the label sequence of the tokens in a sentence, conditional on the word sequence and the output scores from the Bi-LTSM model for the given sentence. In doing so, the CRF models the conditional distribution of the label sequence instead of a joint distribution with the words and output scores. Thus, it does not assume independent features, while at the same time not making strong distributional assumptions about the relationship between the features and sequence labels.
Data and Evaluation Metrics
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
The i2b2 corpus was used by all tracks of the 2014 i2b2 challenge. It consists of 1,304 patient progress notes for 296 diabetic patients. All the PHIs were removed and replaced with random replacements. The PHIs in this data set were broken down first into the HIPAA categories and then into the i2b2-PHI categories as shown in Table TABREF23 . Overall, the data set contains 56,348 sentences with 984,723 separate tokens of which 41,355 are separate PHI tokens, which represent 28,867 separate PHI instances. For our test-train-valid split, we chose 10% of the training sentences to serve as our validation set, which represents 3,381 sentences while a separately held-out official test data set was specified by the competition. This test data set contains 22,541 sentences including 15,275 separate PHI tokens.
The nursing notes were originally collected by Neamatullah et al. BIBREF3 . The data set contains 2,434 notes of which there are 1,724 separate PHI instances. A summary of the breakdown of the PHI categories of this nursing corpora can be seen in Table TABREF23 .
Evaluation Metrics
For de-identification tasks, the three metrics we will use to evaluate the performance of our architecture are Precision, Recall and F1 score, as defined below. We will compute both the binary F1 score and the three metrics for each PHI type for both data sets. Note that the binary F1 score measures whether or not a token was identified as a PHI, as opposed to correctly predicting the right PHI type. For de-identification, we place more importance on identifying whether a token was a PHI instance, with correctly predicting the right PHI type as a secondary objective. $Precision = \frac{TP}{TP + FP}$, $Recall = \frac{TP}{TP + FN}$, $F_1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$, where TP, FP and FN denote true positives, false positives and false negatives respectively.
Notice that a high recall is paramount given the risk of accidentally disclosing sensitive patient information if not all PHI are detected and removed from the document or replaced by fake data. A high precision is also desired to preserve the integrity of the documents, as a large number of false positives might obscure the meaning of the text or even distort it. As the harmonic mean of precision and recall, the F1 score gives an overall measure of model performance that is frequently employed in the NLP literature.
As a benchmark, we will use the results of the systems by Burckhardt et al. BIBREF22 , Liu et al. BIBREF18 , Dernoncourt et al. BIBREF9 and Yang et al. BIBREF10 on the i2b2 dataset and the performance of Burckhardt et al. on the nursing corpus. Note that Burckhardt et al. used the entire data set for their results as it is an unsupervised learning system while we had to split our data set into 60% training data and 40% testing data.
Results
We evaluated the architecture on both the i2b2-PHI categories and the HIPAA-PHI categories for the i2b2 data set based on token-level labels. Note that the HIPAA categories are a super set of the i2b2-PHI categories. We also ran the analysis 5+ times to give us a range of maximum scores for the different data sets.
Table TABREF25 gives us a summary of how our architecture performed against other systems on the binary INLINEFORM0 score metrics while Table TABREF26 and Table TABREF27 summarizes the performance of our architecture against other systems on HIPAA-PHI categories and i2b2-PHI categories respectively. Table TABREF28 presents a summary of the performance on the nursing note corpus while also contrasting the performances achieved by the deidentify system.
Discussion and Error Analysis
As we can see in Table TABREF26 , with the exception of ID, our architecture performs considerably better than systems by Liu et al. and Yang et al. Dernoncourt et al. did not provide exact figures for the HIPAA-PHI categories so we have excluded them from our analysis. Furthermore, Table TABREF25 shows that our architecture performs similarly to the best scores achieved by Dernoncourt et al., with our architecture slightly edging out Dernoncourt et al. on the precision metric. For the nursing corpus, our system, while not performing as well as the performances on i2b2 data set, managed to best the scores achieved by the deidentify system while also achieving a binary INLINEFORM0 score of over 0.812. It is important to note that deidentify was a unsupervised learning system, it did not require the use of a train-valid-test split and therefore, used the whole data set for their performance numbers. The results of our architecture is assessed using a 60%/40% train/test split.
Our architecture noticeably converges faster than the NeuroNER, which was trained for 100 epochs and the system by Liu et al. BIBREF18 which was trained for 80 epochs. Different runs of training our architecture on the i2b2 dataset converge at around 23 INLINEFORM0 4 epochs. A possible explanation for this is due to our architecture using the Adam optimizer, whereas the NeuroNER system use the Stochastic Gradient Descent (SGD) optimizer. In fact, Reimers et al. BIBREF20 show that the SGD optimizer performed considerably worse than the Adam optimizer for different NLP tasks.
Furthermore, we also do not see any noticeable improvements from using the PubMed database trained word embeddings BIBREF14 instead of the general text trained GloVe word embeddings. In fact, we consistently saw better INLINEFORM0 scores using the GloVe embeddings. This could be due to the fact that our use case was for identifying general labels such as Names, Phones, Locations etc. instead of bio-medical specific terms such as diseases which are far better represented in the PubMed corpus.
Error Analysis
We will mainly focus on the two PHI categories: Profession and ID for our error analysis on the i2b2 data set. It is interesting to note that the best performing models on the i2b2 data set by Dernoncourt et al. BIBREF9 experienced similar lower performances on the same two categories. However, we note the performances by Dernoncourt et al. were achieved using a “combination of n-gram, morphological, orthographic and gazetteer features” BIBREF9 while our architecture uses only POS tagging as an external feature. Dernoncourt et al. posits that the lower performance on the Profession category might be due to the close embeddings of the Profession tokens to other PHI tokens which we can confirm on our architecture as well. Furthermore, our experiments show that the Profession PHI performs considerably better with the PubMed embedded model than GloVe embedded model. This could be due to the fact that PubMed embeddings were trained on the PubMed database, which is a database of medical literature. GloVe on the other hand was trained on a general database, which means the PubMed embeddings for Profession tokens might not be as close to other tokens as is the case for the GloVe embeddings.
For the ID PHI, our analysis shows that some of the errors were due to tokenization errors. For example, a “:” was counted as PHI token which our architecture correctly predicted as not a PHI token. Since our architecture is not custom tailored to detect sophisticated ID patterns such as the systems in BIBREF9 , BIBREF10 , we have failed to detect some ID PHIs such as “265-01-73”, a medical record number, which our architecture predicted as a phone number due to the format of the number. Such errors could easily be mitigated by the use of simple regular expressions.
We can see that our architecture outperforms the deidentify system by a considerable margin on most categories as measured by the F1 score. For example, the authors of deidentify note that Date PHIs have considerably low precision values, while our architecture achieves a precision value of greater than 0.915 for the Date PHI. However, Burckhardt et al. BIBREF22 achieve an impressive precision of 0.899 and recall of 1.0 for the Phone PHI while our architecture only manages 0.778 and 0.583 respectively. Our analysis of this category shows that this is mainly due to a difference in tokenization: stand-alone numbers are being classified as not a PHI.
We tried to use the model that we trained on the i2b2 data set to predict the categories of the nursing data set. However, due to difference in the text structure, the actual text and the format, we achieved less than random performance on the nursing data set. This brings up an important point about the transferability of such models.
Ablation Analysis
Our ablation analysis shows us that the layers of our models adds to the overall performance. Figure FIGREF33 shows the binary INLINEFORM0 scores on the i2b2 data set with each bar being a feature toggled off. For example, the “No Char Embd” bar shows the performance of the model with no character embeddings and everything else the same as our best model.
We can see a noticeable change in the performance if we do not include the ELMo embeddings versus no GloVe embeddings. The slight decrease in performance when we use no GloVe embeddings shows us that this is a feature we might choose to exclude if computation time is limited. Furthermore, we can see the impact of having no variational dropout and only using a naïve dropout, it shows that variational dropout is better at regularizing our neural network.
Conclusion
In this study, we show that our deep learning architecture, which incorporates the latest developments in contextual word embeddings and NLP, achieves state-of-the-art performance on two widely available gold standard de-identification data sets while also achieving similar performance as the best available system in fewer epochs. Our architecture also significantly improves over the performance of the hybrid system deidentify on the nursing data set.
This architecture could be integrated into a client-ready system such as the deidentify system. However, as mentioned in Section SECREF8 , the use of a dictionary (or gazetteer) might help improve the model even further, especially with regards to the Location and Profession PHI types. Such a hybrid system would be highly beneficial to practitioners who need to de-identify patient data on a daily basis. | 2014 i2b2 de-identification challenge data set BIBREF2, nursing notes corpus BIBREF3 |
6baf5d7739758bdd79326ce8f50731c785029802 | 6baf5d7739758bdd79326ce8f50731c785029802_0 | Q: Which four languages do they experiment with?
Text: Introduction
Speech conveys human emotions most naturally. In recent years there has been increased research interest in the speech emotion recognition (SER) domain. The first step in a typical SER system is extracting linguistic and acoustic features from the speech signal. Some para-linguistic studies find Low-Level Descriptor (LLD) features of the speech signal to be most relevant to studying emotions in speech. These features include frequency-related parameters like pitch and jitter, energy parameters like shimmer and loudness, spectral parameters like alpha ratio and other parameters that convey cepstral and dynamic information. Feature extraction is followed by a classification task to predict the emotions of the speaker.
Data scarcity, i.e. the lack of freely available speech corpora, is a problem for research in the speech domain in general. This also means that there are even fewer resources for studying emotion in speech. Those that are available are dissimilar in terms of the spoken language, type of emotion (i.e. naturalistic, elicited, or acted) and labelling scheme (i.e. dimensional or categorical).
Across various studies involving SER we observe that the performance of a model depends heavily on whether training and testing are performed on the same corpus or not. Performance is best when the focus is on a single corpus at a time, without considering the performance of the model in cross-language and cross-corpus scenarios. In this work, we work with diverse SER datasets, i.e. we tackle the problem in both cross-language and cross-corpus settings. We use transfer learning across SER datasets and investigate the effects of the spoken language on the accuracy of the emotion recognition system using our Multi-Task Learning framework.
The paper is organized as follows: Section 2 reviews related work on SER, cross-lingual and cross-corpus SER, and recent studies on the role of language identification in speech emotion recognition systems; Section 3 describes the datasets that have been used; Section 4 presents detailed descriptions of the three types of SER experiments we conduct in this paper. In Section 5, we present our results and evaluations of our models. Section 6 presents some additional experiments to draw a direct comparison with previously published research. Finally, we discuss future work and conclude the paper.
Related Work
Over the last two decades there has been considerable research work on speech emotion recognition. However, these studies differ in terms of the training corpora, test conditions, evaluation strategies and more, which creates difficulty in reproducing exact results. In BIBREF0, the authors give an overview of the types of features, classifiers and emotional speech databases used in various SER research.
Speech emotion recognition has evolved over time with regard to both the types of features and the models used as classifiers. The features used range from simple ones like pitch and intensity BIBREF1 , BIBREF2 to low-level descriptor features (LLDs) like jitter, shimmer, HNR and spectral/cepstral parameters like alpha ratio BIBREF3 , BIBREF4 . Other features include rhythm and sentence duration BIBREF5 and non-uniform perceptual linear predictive (UN-PLP) features BIBREF6 . Sometimes, linear predictive cepstral coefficients (LPCCs) BIBREF7 are used in conjunction with mel-frequency cepstral coefficients (MFCCs).
There have been studies on SER in languages other than english. For example, BIBREF8 propose a deep learning model consisting of stacked auto-encoders and deep belief networks for SER on the famous German dataset EMODB. BIBREF9 were the first to study SER work on the GEES, a Serbian emotional speech corpus. The authors developed a multistage strategy with SVMs for emotion recognition on a single dataset.
Relatively few studies address the problem of cross-language and cross-corpus speech emotion recognition BIBREF10 , BIBREF11 . Recent work by BIBREF12 , BIBREF13 studies SER for languages belonging to different language families, like Urdu vs. Italian or German. Other work involving cross-language emotion recognition includes BIBREF14 , which studies speech emotion recognition for the Mandarin language vs. western languages like German and Danish. BIBREF15 developed an ensemble SVM for emotion detection with a focus on emotion recognition in unseen languages.
Although there are many psychological case studies on the effect of language and culture on SER, there are very few computational linguistic studies in the same domain. In BIBREF16, the authors support the claim that SER is language independent; however, they also reveal that there are language-specific differences in emotion recognition, with English showing a higher recognition rate compared to Malay and Mandarin. In BIBREF17 the authors proposed a two-pass method based on language identification followed by emotion recognition, which showed significant improvement in performance. They used the English IEMOCAP, the German Emo-DB, and a Japanese corpus to recognize four emotions based on the proposed two-pass method.
In BIBREF18, the authors also use language identification to enhance cross-lingual SER. They concluded that in order to recognize the emotions of a speaker whose language is unknown, it is beneficial to use a language identifier followed by model selection instead of using a model which is trained based on all available languages. This work is to the best of our knowledge the first work that jointly tries to learn the language and emotion in speech.
Datasets ::: EMO-DB
This dataset was introduced by BIBREF19. Language of recordings is German and consists of acted speech with 7 categorical labels. The semantic content in this data is pre-defined in 10 emotionally neutral German short sentences. It contains 494 emotionally labeled phrases collected from 5 male and 5 female actors in age range of 21-35 years.
Datasets ::: SAVEE
Surrey Audio-Visual Expressed Emotion (SAVEE) database BIBREF20 is a famous acted-speech multimodal corpus. It consists of 480 British English utterances from 4 male actors in 7 different emotion categories. The text material consisted of 15 TIMIT BIBREF21 sentences per emotion: 3 common, 2 emotion-specific and 10 generic sentences that were different for each emotion and phonetically-balanced.
Datasets ::: EMOVO
This Italian acted-speech emotional corpus BIBREF22 contains recordings of 6 actors who each acted out 14 emotionally neutral short sentences to simulate 7 emotional states. It consists of 588 utterances and was annotated by two different groups of 24 annotators.
Datasets ::: MASC: Mandarin Affective Speech Corpus
This is a Mandarin acted-speech emotional corpus consisting of 68 speakers (23 female, 45 male), each reading material made up of five phrases, fifteen sentences and two paragraphs to simulate 5 emotional states. Altogether this database BIBREF23 contains 25,636 utterances.
Datasets ::: IEMOCAP: The Interactive Emotional Dyadic Motion Capture
The IEMOCAP database BIBREF24 is an English-language multi-modal emotional speech corpus. It contains approximately 12 hours of audiovisual data, including video, speech, motion capture of the face, and text transcriptions. It consists of dyadic sessions where actors perform improvisations or scripted scenarios, specifically selected to elicit emotional expressions. It has categorical labels, such as anger, happiness, sadness and neutrality, as well as dimensional labels such as valence, activation and dominance.
Experiments ::: SER on Individual Datasets
The first set of experiments focused on performing speech emotion recognition on each of the 5 datasets individually. We perform 5-way classification using the 5 emotions common to all datasets: happy, sad, fear, anger and neutral. For each dataset, we experiment with different types of features and classifiers. To generate mel-frequency cepstral coefficient (MFCC) features we used the Kaldi toolkit: we created spk2utt, utt2spk and wav.scp files for each dataset and generated MFCC features in .ark format, then used the kaldiio Python library to convert the .ark files to numpy arrays. Apart from MFCCs, we also computed pitch features using the same toolkit. We keep a maximum of 120 frames of the input, zero-padding shorter utterances and clipping longer ones, to end up with a (120, 13) feature matrix for each utterance.
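As a concrete illustration of this preprocessing step, the sketch below loads Kaldi .ark features with kaldiio and pads or clips each utterance to 120 frames; the file name is illustrative and this is not the released experiment code.

```python
# Sketch of the frame-length normalization described above (file name assumed).
import numpy as np
import kaldiio

MAX_FRAMES, N_MFCC = 120, 13

def pad_or_clip(feats, max_frames=MAX_FRAMES):
    """Zero-pad short utterances and clip long ones to a fixed number of frames."""
    if feats.shape[0] >= max_frames:
        return feats[:max_frames]
    out = np.zeros((max_frames, feats.shape[1]), dtype=feats.dtype)
    out[:feats.shape[0]] = feats
    return out

features = {
    utt_id: pad_or_clip(mfcc)                 # each utterance becomes a (120, 13) array
    for utt_id, mfcc in kaldiio.load_ark("raw_mfcc.ark")
}
```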
To compare against the emotion classification performance obtained with MFCC input features, we also tried a different feature set, the IS09 emotion feature set BIBREF25, which has shown good performance on SER tasks in previous research. The IS09 feature set contains 384 features that result from a systematic combination of 16 low-level descriptors (LLDs) and their corresponding first-order delta coefficients with 12 functionals. The 16 LLDs consist of zero-crossing rate (ZCR), root-mean-square (RMS) frame energy, pitch frequency (normalized to 500 Hz), harmonics-to-noise ratio (HNR) by autocorrelation function, and mel-frequency cepstral coefficients (MFCC) 1–12 (in full accordance with HTK-based computation). The 12 functionals are mean, standard deviation, kurtosis, skewness, minimum, maximum, relative position, range, and the offset and slope of a linear regression of the segment contours together with its mean square error (MSE), applied on a chunk. These features were extracted with the openSMILE toolkit; the extraction script is included in the submitted code (see the IS09 directory).
Once the input features were ready, we created test sets from each of the 5 datasets by leaving one speaker out for the small datasets (EMOVO, EMODB, SAVEE) and two speakers out for the larger datasets (IEMOCAP, MASC). Thus, for all corpora, the speakers in the test sets do not appear in the training set. We then performed SER using both classical machine learning and deep learning models: a one-vs-rest Support Vector Classifier and a Logistic Regression classifier as the classical ML models, and a stacked LSTM model as the deep learning classifier. The LSTM network comprises 2 hidden layers with 128 LSTM cells each, followed by a dense layer of size 5 with softmax activation.
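The sketch below outlines these classifiers; the layer sizes follow the description above, while the SVM kernel, solver settings, optimizer, and loss are assumptions made for illustration.

```python
# Minimal sketch of the classifiers: one-vs-rest SVM / logistic regression on
# fixed-length IS09 vectors, and a stacked LSTM on (120, 13) MFCC sequences.
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

NUM_CLASSES = 5  # happy, sad, fear, anger, neutral

svm_clf = OneVsRestClassifier(SVC(kernel="linear"))   # classical baseline (kernel assumed)
logreg_clf = LogisticRegression(max_iter=1000)

def build_lstm_classifier(n_frames=120, n_feats=13):
    """Stacked LSTM: two layers of 128 cells, then a size-5 softmax layer."""
    model = Sequential([
        LSTM(128, return_sequences=True, input_shape=(n_frames, n_feats)),
        LSTM(128),
        Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```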
We present a comparative study across all datasets, feature sets and classifiers in Table 2.
Experiments ::: SER using Transfer learning for small sized datasets
In the next set of experiments, we tried to improve on the results obtained for the individual datasets by leveraging transfer learning. While we had relatively large corpora for languages like English and Chinese, the speech emotion datasets for languages like Italian and German were very small, with only around 500 labeled utterances each. Such a small amount of training data is not sufficient, especially when training a deep learning based model.
We used the same LSTM classifier as detailed in Section 4.1, with an additional dense layer before the final softmax layer. We train this base model on the large English IEMOCAP dataset and then freeze the weights of the LSTM layers, so that the only trainable weights remaining in the classifier are those of the penultimate dense layer. We fine-tune the weights of this layer using the small datasets (e.g., SAVEE, EMODB, EMOVO) and test performance on the same test sets created in Section 4.1.
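A minimal sketch of this fine-tuning procedure is given below, assuming a Keras implementation; the size of the additional dense layer (64) and all training settings are assumptions rather than the exact values used in our experiments.

```python
# Pre-train on IEMOCAP, freeze the LSTM layers, fine-tune the dense weights.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

base = Sequential([
    LSTM(128, return_sequences=True, input_shape=(120, 13)),
    LSTM(128),
    Dense(64, activation="relu"),               # additional penultimate dense layer (size assumed)
    Dense(5, activation="softmax"),
])
base.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# base.fit(x_iemocap, y_iemocap, epochs=20)     # 1) pre-train on IEMOCAP

for layer in base.layers:
    if isinstance(layer, LSTM):
        layer.trainable = False                 # 2) freeze the LSTM layers

base.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# base.fit(x_small, y_small, epochs=20)         # 3) fine-tune on SAVEE/EMODB/EMOVO
```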
Table 3 shows the results of transfer learning experiments.
Experiments ::: Multitask learning for SER
The last set of experiments studies the effect of the spoken language on emotion recognition. Due to the lack of adequately sized emotion corpora in many languages, researchers have previously trained emotion recognition models on cross-corpus data, i.e., training on data in one or more languages and testing on another. This approach is valid only if the expression of emotion is the same across languages, i.e., no matter which language you speak, the way you convey happiness, anger, sadness, etc. remains the same; for example, low-pitch signals are generally associated with sadness, and high pitch and amplitude with anger. If the expression of emotion is indeed language agnostic, we could train emotion recognition models on high-resource languages and use the same models for low-resource languages.
To verify this hypothesis, we propose a multi-task framework that jointly learns to predict the emotion and the language in which it is being expressed. The framework is illustrated in Figure 2; the parameters of the LSTM model remain the same as in Section 4.1. Table 4 compares the SER performance of training a single classifier on data from all languages (as shown in Figure 1) with training on data from all languages in the multi-task setting.
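The sketch below illustrates one way to realize this multi-task setup with a shared LSTM encoder and two softmax heads; the loss weighting is an assumption made for illustration.

```python
# Shared stacked-LSTM encoder with an emotion head and a language head.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

N_EMOTIONS, N_LANGUAGES = 5, 4   # 4 languages: German, English, Italian, Mandarin

inp = Input(shape=(120, 13))
x = LSTM(128, return_sequences=True)(inp)
x = LSTM(128)(x)
emotion_out = Dense(N_EMOTIONS, activation="softmax", name="emotion")(x)
language_out = Dense(N_LANGUAGES, activation="softmax", name="language")(x)

model = Model(inp, [emotion_out, language_out])
model.compile(
    optimizer="adam",
    loss={"emotion": "sparse_categorical_crossentropy",
          "language": "sparse_categorical_crossentropy"},
    loss_weights={"emotion": 1.0, "language": 0.5},   # assumed weighting
)
```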
Results and Analysis
We will discuss the results of each experiment in detail in this section:
For the SER experiments on individual datasets, Table 2 shows that the SVC classifier with IS09 input features gave the best performance on four out of the 5 datasets. We also note a large difference in accuracy when using the same LSTM classifier and only changing the input features (MFCC vs. IS09): the LSTM model with IS09 input features gives better emotion recognition performance on four out of the 5 datasets. These experiments suggest the superiority of IS09 features over MFCCs for SER tasks.
As expected, the second set of experiments shows that transfer learning is beneficial for SER on small datasets. In Table 3 we observe that training on IEMOCAP and then fine-tuning on the training set of a small dataset improves performance on the German dataset EMODB and the smaller English dataset SAVEE. However, we also note a small drop in performance on the Italian dataset EMOVO.
The results in Table 4 do not show an improvement from using language identification as an auxiliary task in speech emotion recognition. While an improvement would have suggested that the spoken language affects the way people express emotions in speech, the current results instead suggest that emotion in speech is universal, i.e., language agnostic: people speaking different languages express emotions in the same way, and SER models can be jointly trained across the SER corpora available for different languages.
Comparison with Previous Research
In this section we present a comparative study of our work against two previous research papers. We keep this comparison in a separate section because, in order to compare directly with these two papers, we had to follow their train-test splits, number of emotion classes, etc.
In “Analysis of Deep Learning Architectures for Cross-corpus Speech Emotion Recognition” BIBREF26, the authors discuss cross-corpus training using 6 datasets. In one of their experiments, they report performance on the test set of each corpus for models trained only on the IEMOCAP dataset. When we perform the same experiment, i.e., train our model only on IEMOCAP and test on the other datasets using IS09 input features and the SVC classifier, we observe better results, even though we perform a 5-way classification task compared to their 4-way classification. Results are shown in Table 5.
In “Multi-modal emotion recognition on IEMOCAP with neural networks” BIBREF27, the authors present three deep learning based speech emotion recognition models. We follow the exact same data pre-processing steps to obtain the same train-test split, and we use the same LSTM model as their best-performing model to verify that we get the same result, i.e., an accuracy of 55.65%. However, we could improve this to 56.45% by using IS09 features as input with a simple SVC classifier. This experiment suggests that, given the right input features, classical machine learning models can achieve equal or better performance in much less training time than sophisticated deep learning classifiers.
Future Work
In the future we would like to experiment with more architectures and feature sets. We would also like to extend this study to other languages, especially low-resource languages. Since all datasets in this study consist of acted speech, another interesting direction would be to study the differences that arise when dealing with natural speech.
Conclusion
The main conclusions that can be drawn from this study are that classical machine learning models may perform as well as deep learning models on SER tasks, given the right input features, and that IS09 features consistently perform well for SER across datasets in different languages. Transfer learning proved to be an effective technique for performing SER on small datasets, and the multi-task learning experiments shed light on the language-agnostic nature of the speech emotion recognition task. | German, English, Italian, Chinese
5c4c8e91d28935e1655a582568cc9d94149da2b2 | 5c4c8e91d28935e1655a582568cc9d94149da2b2_0 | Q: Does DCA or GMM-based attention perform better in experiments?
Text: Introduction
Sequence-to-sequence models that use an attention mechanism to align the input and output sequences BIBREF0, BIBREF1 are currently the predominant paradigm in end-to-end TTS. Approaches based on the seminal Tacotron system BIBREF2 have demonstrated naturalness that rivals that of human speech for certain domains BIBREF3. Despite these successes, there are sometimes complaints of a lack of robustness in the alignment procedure that leads to missing or repeating words, incomplete synthesis, or an inability to generalize to longer utterances BIBREF4, BIBREF5, BIBREF6.
The original Tacotron system BIBREF2 used the content-based attention mechanism introduced in BIBREF1 to align the target text with the output spectrogram. This mechanism is purely content-based and does not exploit the monotonicity and locality properties of TTS alignment, making it one of the least stable choices. The Tacotron 2 system BIBREF3 used the improved hybrid location-sensitive mechanism from BIBREF7 that combines content-based and location-based features, allowing generalization to utterances longer than those seen during training.
The hybrid mechanism still has occasional alignment issues which led a number of authors to develop attention mechanisms that directly exploit monotonicity BIBREF8, BIBREF4, BIBREF5. These monotonic alignment mechanisms have demonstrated properties like increased alignment speed during training, improved stability, enhanced naturalness, and a virtual elimination of synthesis errors. Downsides of these methods include decreased efficiency due to a reliance on recursion to marginalize over possible alignments, the necessity of training hacks to ensure learning doesn't stall or become unstable, and decreased quality when operating in a more efficient hard alignment mode during inference.
Separately, some authors BIBREF9 have moved back toward the purely location-based GMM attention introduced by Graves in BIBREF0, and some have proposed stabilizing GMM attention by using softplus nonlinearities in place of the exponential function BIBREF10, BIBREF11. However, there has been no systematic comparison of these design choices.
In this paper, we compare the content-based and location-sensitive mechanisms used in Tacotron 1 and 2 with a variety of simple location-relative mechanisms in terms of alignment speed and consistency, naturalness of the synthesized speech, and ability to generalize to long utterances. We show that GMM-based mechanisms are able to generalize to very long (potentially infinite-length) utterances, and we introduce simple modifications that result in improved speed and consistency of alignment during training. We also introduce a new location-relative mechanism called Dynamic Convolution Attention that modifies the hybrid location-sensitive mechanism from Tacotron 2 to be purely location-based, allowing it to generalize to very long utterances as well.
Two Families of Attention Mechanisms ::: Basic Setup
The system that we use in this paper is based on the original Tacotron system BIBREF2 with architectural modifications from the baseline model detailed in the appendix of BIBREF11. We use the CBHG encoder from BIBREF2 to produce a sequence of encoder outputs, $\lbrace \mathbf {h}_j\rbrace _{j=1}^L$, from a length-$L$ input sequence of target phonemes, $\lbrace \mathbf {x}_j\rbrace _{j=1}^L$. Then an attention RNN, (DISPLAY_FORM2), produces a sequence of states, $\lbrace \mathbf {s}_i\rbrace _{i=1}^T$, that the attention mechanism uses to compute $\mathbf {\alpha }_i$, the alignment at decoder step $i$. Additional arguments to the attention function depend on the specific attention mechanism (e.g., whether it is content-based, location-based, or both). The context vector, $\mathbf {c}_i$, that is fed to the decoder RNN is computed using the alignment, $\mathbf {\alpha }_i$, to produce a weighted average of encoder states. The decoder is fed both the context vector and the current attention RNN state, and an output function produces the decoder output, $\mathbf {y}_i$, from the decoder RNN state, $\mathbf {d}_i$.
Two Families of Attention Mechanisms ::: GMM-Based Mechanisms
An early sequence-to-sequence attention mechanism was proposed by Graves in BIBREF0. This approach is a purely location-based mechanism that uses an unnormalized mixture of $K$ Gaussians to produce the attention weights, $\mathbf {\alpha }_i$, for each encoder state. The general form of this type of attention is shown in (DISPLAY_FORM4), where $\mathbf {w}_i$, $\mathbf {Z}_i$, $\mathbf {\Delta }_i$, and $\mathbf {\sigma }_i$ are computed from the attention RNN state. The mean of each Gaussian component is computed using the recurrence relation $\mathbf {\mu }_i = \mathbf {\mu }_{i-1} + \mathbf {\Delta }_i$, which makes the mechanism location-relative and potentially monotonic if $\mathbf {\Delta }_i$ is constrained to be positive.
In order to compute the mixture parameters, intermediate parameters ($\hat{\mathbf {w}}_i,\hat{\mathbf {\Delta }}_i,\hat{\mathbf {\sigma }}_i$) are first computed using the MLP in (DISPLAY_FORM5) and then converted to the final parameters using the expressions in Table TABREF6.
The version 0 (V0) row in Table TABREF6 corresponds to the original mechanism proposed in BIBREF0. V1 adds normalization of the mixture weights and components and uses the exponential function to compute the mean offset and variance. V2 uses the softplus function to compute the mean offset and standard deviation.
Another modification we test is the addition of initial biases to the intermediate parameters $\hat{\mathbf {\Delta }}_i$ and $\hat{\mathbf {\sigma }}_i$ in order to encourage the final parameters $\mathbf {\Delta }_i$ and $\mathbf {\sigma }_i$ to take on useful values at initialization. In our experiments, we test versions of V1 and V2 GMM attention that use biases that target a value of $\mathbf {\Delta }_i=1$ for the initial forward movement and $\mathbf {\sigma }_i=10$ for the initial standard deviation (taking into account the different nonlinearities used to compute the parameters).
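The sketch below (in PyTorch, for illustration only) paraphrases the parameter mappings of Table TABREF6 for the V1 and V2 variants and shows how softplus-based initial biases targeting $\mathbf {\Delta }_i=1$ and $\mathbf {\sigma }_i=10$ could be computed; the MLP that produces the intermediate parameters and all tensor shapes are omitted, and the exact expressions may differ from the table.

```python
# Rough paraphrase of the V1/V2 GMM-attention parameter mappings.
import math
import torch
import torch.nn.functional as F

def gmm_attention_params(w_hat, delta_hat, sigma_hat, version="V2"):
    """Map intermediate MLP outputs to mixture weights, mean offsets, and scales."""
    w = torch.softmax(w_hat, dim=-1)               # V1/V2: normalized mixture weights
    if version == "V1":
        delta = torch.exp(delta_hat)               # exponential keeps the mean offset positive
        sigma = torch.sqrt(torch.exp(sigma_hat))   # V1 parameterizes the variance
    else:                                          # V2
        delta = F.softplus(delta_hat)              # softplus keeps the step positive
        sigma = F.softplus(sigma_hat)              # V2 parameterizes the standard deviation
    return w, delta, sigma

# V2b-style initial biases: offsets chosen so that delta ~ 1 and sigma ~ 10 at
# initialization (inverse softplus: x = log(exp(y) - 1)).
delta_bias = math.log(math.exp(1.0) - 1.0)    # ~0.54
sigma_bias = math.log(math.exp(10.0) - 1.0)   # ~10.0
```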
Two Families of Attention Mechanisms ::: Additive Energy-Based Mechanisms
A separate family of attention mechanisms uses an MLP to compute attention energies, $\mathbf {e}_i$, that are converted to attention weights, $\mathbf {\alpha }_i$, using the softmax function. This family includes the content-based mechanism introduced in BIBREF1 and the hybrid location-sensitive mechanism from BIBREF7. A generalized formulation of this family is shown in (DISPLAY_FORM8).
Here we see the content-based terms, $W\mathbf {s}_i$ and $V\mathbf {h}_j$, that represent query/key comparisons, and the location-sensitive term, $U\mathbf {f}_{i,j}$, that uses convolutional features computed from the previous attention weights as in BIBREF7. Also present are two new terms, $T\mathbf {g}_{i,j}$ and $p_{i,j}$, that are unique to our proposed Dynamic Convolution Attention. The $T\mathbf {g}_{i,j}$ term is very similar to $U\mathbf {f}_{i,j}$ except that it uses dynamic filters that are computed from the current attention RNN state. The $p_{i,j}$ term is the output of a fixed prior filter that biases the mechanism to favor certain types of alignment. Table TABREF9 shows which of the terms are present in the three energy-based mechanisms we compare in this paper.
Two Families of Attention Mechanisms ::: Dynamic Convolution Attention
In designing Dynamic Convolution Attention (DCA), we were motivated by location-relative mechanisms like GMM attention, but desired fully normalized attention weights. Despite the fact that GMM attention V1 and V2 use normalized mixture weights and components, the attention weights still end up unnormalized because they are sampled from a continuous probability density function. This can lead to occasional spikes or dropouts in the alignment, and attempting to directly normalize GMM attention weights results in unstable training. Attention normalization isn't a significant problem in fine-grained output-to-text alignment, but becomes more of an issue for coarser-grained alignment tasks where the attention window needs to gradually move to the next index (for example in variable-length prosody transfer applications BIBREF12). Because DCA is in the energy-based attention family, it is normalized by default and should work well for a variety of monotonic alignment tasks.
Another issue with GMM attention is that because it uses a mixture of distributions with infinite support, it isn't necessarily monotonic. At any time, the mechanism could choose to emphasize a component whose mean is at an earlier point in the sequence, or it could expand the variance of a component to look backward in time, potentially hurting alignment stability.
To address monotonicity issues, we make modifications to the hybrid location-sensitive mechanism. First we remove the content-based terms, $W\mathbf {s}_i$ and $V\mathbf {h}_j$, which prevents the alignment from moving backward due to a query/key match at a past timestep. Doing this prevents the mechanism from adjusting its alignment trajectory, as it is only left with a set of static filters, $U\mathbf {f}_{i,j}$, that learn to bias the alignment to move forward by a certain fixed amount. To remedy this, we add a set of learned dynamic filters, $T\mathbf {g}_{i,j}$, that are computed from the attention RNN state. These filters serve to dynamically adjust the alignment relative to the alignment at the previous step.
In order to prevent the dynamic filters from moving things backward, we use a single fixed prior filter to bias the alignment toward short forward steps. Unlike the static and dynamic filters, the prior filter is a causal filter that only allows forward progression of the alignment. In order to enforce the monotonicity constraint, the output of the filter is converted to the logit domain via the log function before being added to the energy function in (DISPLAY_FORM8) (we also floor the prior logits at $-10^6$ to prevent underflow).
We set the taps of the prior filter using values from the beta-binomial distribution, a two-parameter discrete distribution with finite support whose probability mass function is $P(k\,|\,n,\alpha ,\beta ) = \binom{n}{k}\,\frac{\textrm {B}(k+\alpha ,\, n-k+\beta )}{\textrm {B}(\alpha ,\beta )}$,
where $\textrm {B}(\cdot )$ is the beta function. For our experiments we use the parameters $\alpha =0.1$ and $\beta =0.9$ to set the taps on a length-11 prior filter ($n=10$). Repeated application of the prior filter encourages an average forward movement of 1 encoder step per decoder step ($\mathbb {E}[k] = \alpha n/(\alpha +\beta )$), with the uncertainty in the prior alignment increasing after each step. The prior parameters could be tailored to reflect the phonemic rate of each dataset in order to optimize alignment speed during training, but for simplicity we use the same values for all experiments. Figure FIGREF12 shows the prior filter along with the alignment weights every 20 decoder steps when ignoring the contribution from other terms in (DISPLAY_FORM8).
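A short sketch of how the prior filter taps can be computed is shown below; the SciPy-based implementation is illustrative and not the code used to produce our results.

```python
# Length-11 DCA prior filter from a beta-binomial with alpha=0.1, beta=0.9, n=10.
import numpy as np
from scipy.stats import betabinom

alpha, beta_, n = 0.1, 0.9, 10
taps = betabinom(n, alpha, beta_).pmf(np.arange(n + 1))   # length-11 prior filter

expected_step = alpha * n / (alpha + beta_)               # = 1 encoder step per decoder step

with np.errstate(divide="ignore"):
    prior_logits = np.maximum(np.log(taps), -1e6)         # floor the prior logits at -1e6
```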
Experiments ::: Experiment Setup
In our experiments we compare the GMM and additive energy-based families of attention mechanisms enumerated in Tables TABREF6 and TABREF9. We use the Tacotron architecture described in Section SECREF1 and only vary the attention function used to compute the attention weights, $\mathbf {\alpha }_i$. The decoder produces two 128-bin, 12.5ms-hop mel spectrogram frames per step. We train each model using the Adam optimizer for 300,000 steps with a gradient clipping threshold of 5 and a batch size of 256, spread across 32 Google Cloud TPU cores. We use an initial learning rate of $10^{-3}$ that is reduced to $5\times 10^{-4}$, $3\times 10^{-4}$, $10^{-4}$, and $5\times 10^{-5}$ at 50k, 100k, 150k, and 200k steps, respectively. To convert the mel spectrograms produced by the models into audio samples, we use a separately-trained WaveRNN BIBREF13 for each speaker.
For all attention mechanisms, we use a size of 128 for all tanh hidden layers. For the GMM mechanisms, we use $K=5$ mixture components. For location-sensitive attention (LSA), we use 32 static filters, each of length 31. For DCA, we use 8 static filters and 8 dynamic filters (all of length 21), and a length-11 causal prior filter as described in Section SECREF10.
We run experiments using two different single-speaker datasets. The first (which we refer to as the Lessac dataset) comprises audiobook recordings from Catherine Byers, the speaker from the 2013 Blizzard Challenge. For this dataset, we train on a 49,852-utterance (37-hour) subset, consisting of utterances up to 5 seconds long, and evaluate on a separate 935-utterance subset. The second is the LJ Speech dataset BIBREF14, a public dataset consisting of audiobook recordings that are segmented into utterances of up to 10 seconds. We train on a 12,764-utterance subset (23 hours) and evaluate on a separate 130-utterance subset.
Experiments ::: Alignment Speed and Consistency
To test the alignment speed and consistency of the various mechanisms, we run 10 identical trials of 10,000 training steps and plot the MCD-DTW between a ground truth holdout set and the output of the model during training. The MCD-DTW is an objective similarity metric that uses dynamic time warping (DTW) to find the minimum mel cepstral distortion (MCD) BIBREF15 between two sequences. The faster a model is able to align with the text, the faster it will start producing reasonable spectrograms that produce a lower MCD-DTW.
Figure FIGREF15 shows these trials for 8 different mechanisms for both the Lessac and LJ datasets. Content-based (CBA), location-sensitive (LSA), and DCA are the three energy-based mechanisms from Table TABREF9, and the 3 GMM varieties are shown in Table TABREF6. We also test the V1 and V2 GMM mechanisms with an initial parameter bias as described in Section SECREF3 (abbreviated as GMMv1b and GMMv2b).
Looking at the plots for the Lessac dataset (top of Figure FIGREF15), we see that the mechanisms on the top row (the energy-based family and GMMv2b) all align consistently, with DCA and GMMv2b aligning the fastest. The GMM mechanisms on the bottom row don't fare as well: while they typically align more often than not, there are a significant number of failures or cases of delayed alignment. It's interesting to note that adding a bias to the GMMv1 mechanism actually hurts its consistency, while adding a bias to GMMv2 helps it.
Looking at the plots for the LJ dataset at bottom of Figure FIGREF15, we first see that the dataset is more difficult in terms of alignment. This is likely due to the higher maximum and average length of the utterances in the training data (most utterances in the LJ dataset are longer than 5 seconds) but could also be caused by an increased presence of intra-utterance pauses and overall lower audio quality. Here, the top row doesn't fare as well: CBA has trouble aligning within the first 10k steps, while DCA and GMMv2b both fail to align once. LSA succeeds on all 10 trials but tends to align more slowly than DCA and GMMv2b when they succeed. With these consistency results in mind, we will only be testing the top row of mechanisms in subsequent evaluations.
Experiments ::: In-Domain Naturalness
We evaluate CBA, LSA, DCA, and GMMv2b using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters. Scores range from 1 to 5, with 5 representing “completely natural speech”. The Lessac and LJ models are evaluated on their respective test sets (hence in-domain), and the results are shown in Table TABREF17. We see that for these utterances, the LSA, DCA, and GMMv2b mechanisms all produce equivalent scores around 4.3, while the content-based mechanism is a bit lower due to occasional catastrophic attention failures.
Experiments ::: Generalization to Long Utterances
Now we evaluate our models on long utterances taken from two chapters of the Harry Potter novels. We use 1034 utterances that vary between 58 and 1648 characters (10 and 299 words). Google Cloud Speech-To-Text is used to produce transcripts of the resulting audio output, and we compute the character error rate (CER) between the produced transcripts and the target transcripts.
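For reference, a generic CER computation of this kind is sketched below; this is not the exact scoring script used in our evaluation.

```python
# Character error rate: Levenshtein distance between the ASR transcript and the
# target transcript, normalized by the target length.
def cer(hyp: str, ref: str) -> float:
    dp = list(range(len(ref) + 1))              # edit distances for the empty hypothesis prefix
    for i in range(1, len(hyp) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(ref) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,              # deletion
                        dp[j - 1] + 1,          # insertion
                        prev + (hyp[i - 1] != ref[j - 1]))  # substitution / match
            prev = cur
    return dp[len(ref)] / max(len(ref), 1)

# Example: cer("the cat sat", "the cat sat on the mat") = 0.5
```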
Figure FIGREF20 shows the CER results as the utterance length is varied for the Lessac models (trained on up to 5 second utterances) and LJ models (trained on up to 10 second utterances). The plots show that CBA fares the worst with the CER shooting up when the test length exceeds the max training length. LSA shoots up soon after at around 3x the max training length, while the two location-relative mechanisms, DCA and GMMv2b, are both able to generalize to the whole range of utterance lengths tested.
Discussion
We have shown that Dynamic Convolution Attention (DCA) and our V2 GMM attention with initial bias (GMMv2b) are able to generalize to utterances much longer than those seen during training, while preserving naturalness on shorter utterances. This opens the door for synthesis of entire paragraphs or long sentences (e.g., for book or news reading applications), which can improve naturalness and continuity compared to synthesizing each sentence or clause separately and then stitching them together.
These two location-relative mechanisms are simple to implement and do not rely on dynamic programming to marginalize over alignments. They also tend to align very quickly during training, which makes the occasional alignment failure easy to detect so training can be restarted. In our alignment trials, despite being slower to align on average, LSA attention seemed to have an edge in terms of alignment consistency; however, we have noticed that slower alignment can sometimes lead to worse quality models, probably because the other model components are being optimized in an unaligned state for longer.
Compared to GMMv2b, DCA can more easily bound its receptive field (because its prior filter numerically disallows excessive forward movement), which makes it easier to incorporate hard windowing optimizations in production. Another advantage of DCA over GMM attention is that its attention weights are normalized, which helps to stabilize the alignment, especially for coarse-grained alignment tasks.
For monotonic alignment tasks like TTS and speech recognition, location-relative attention mechanisms have many advantages and warrant increased consideration and further study. Supplemental materials, including audio examples, are available on the web. | About the same performance |
e4024db40f4b8c1ce593f53b28718e52d5007cd2 | e4024db40f4b8c1ce593f53b28718e52d5007cd2_0 | Q: How they compare varioius mechanisms in terms of naturalness?
Text: Introduction
Sequence-to-sequence models that use an attention mechanism to align the input and output sequences BIBREF0, BIBREF1 are currently the predominant paradigm in end-to-end TTS. Approaches based on the seminal Tacotron system BIBREF2 have demonstrated naturalness that rivals that of human speech for certain domains BIBREF3. Despite these successes, there are sometimes complaints of a lack of robustness in the alignment procedure that leads to missing or repeating words, incomplete synthesis, or an inability to generalize to longer utterances BIBREF4, BIBREF5, BIBREF6.
The original Tacotron system BIBREF2 used the content-based attention mechanism introduced in BIBREF1 to align the target text with the output spectrogram. This mechanism is purely content-based and does not exploit the monotonicity and locality properties of TTS alignment, making it one of the least stable choices. The Tacotron 2 system BIBREF3 used the improved hybrid location-sensitive mechanism from BIBREF7 that combines content-based and location-based features, allowing generalization to utterances longer than those seen during training.
The hybrid mechanism still has occasional alignment issues which led a number of authors to develop attention mechanisms that directly exploit monotonicity BIBREF8, BIBREF4, BIBREF5. These monotonic alignment mechanisms have demonstrated properties like increased alignment speed during training, improved stability, enhanced naturalness, and a virtual elimination of synthesis errors. Downsides of these methods include decreased efficiency due to a reliance on recursion to marginalize over possible alignments, the necessity of training hacks to ensure learning doesn't stall or become unstable, and decreased quality when operating in a more efficient hard alignment mode during inference.
Separately, some authors BIBREF9 have moved back toward the purely location-based GMM attention introduced by Graves in BIBREF0, and some have proposed stabilizing GMM attention by using softplus nonlinearities in place of the exponential function BIBREF10, BIBREF11. However, there has been no systematic comparison of these design choices.
In this paper, we compare the content-based and location-sensitive mechanisms used in Tacotron 1 and 2 with a variety of simple location-relative mechanisms in terms of alignment speed and consistency, naturalness of the synthesized speech, and ability to generalize to long utterances. We show that GMM-based mechanisms are able to generalize to very long (potentially infinite-length) utterances, and we introduce simple modifications that result in improved speed and consistency of alignment during training. We also introduce a new location-relative mechanism called Dynamic Convolution Attention that modifies the hybrid location-sensitive mechanism from Tacotron 2 to be purely location-based, allowing it to generalize to very long utterances as well.
Two Families of Attention Mechanisms ::: Basic Setup
The system that we use in this paper is based on the original Tacotron system BIBREF2 with architectural modifications from the baseline model detailed in the appendix of BIBREF11. We use the CBHG encoder from BIBREF2 to produce a sequence of encoder outputs, $\lbrace j\rbrace _{j=1}^L$, from a length-$L$ input sequence of target phonemes, $\lbrace \mathbf {x}_j\rbrace _{j=1}^L$. Then an attention RNN, (DISPLAY_FORM2), produces a sequence of states, $\lbrace \mathbf {s}_i\rbrace _{i=1}^T$, that the attention mechanism uses to compute $\mathbf {\alpha }_i$, the alignment at decoder step $i$. Additional arguments to the attention function in () depend on the specific attention mechanism (e.g., whether it is content-based, location-based, or both). The context vector, $\mathbf {c}_i$, that is fed to the decoder RNN is computed using the alignment, $\mathbf {\alpha }_i$, to produce a weighted average of encoder states. The decoder is fed both the context vector and the current attention RNN state, and an output function produces the decoder output, $\mathbf {y}_i$, from the decoder RNN state, $\mathbf {d}_i$.
Two Families of Attention Mechanisms ::: GMM-Based Mechanisms
An early sequence-to-sequence attention mechanism was proposed by Graves in BIBREF0. This approach is a purely location-based mechanism that uses an unnormalized mixture of $K$ Gaussians to produce the attention weights, $\mathbf {\alpha }_i$, for each encoder state. The general form of this type of attention is shown in (DISPLAY_FORM4), where $\mathbf {w}_i$, $\mathbf {Z}_i$, $\mathbf {\Delta }_i$, and $\mathbf {\sigma }_i$ are computed from the attention RNN state. The mean of each Gaussian component is computed using the recurrence relation in (), which makes the mechanism location-relative and potentially monotonic if $\mathbf {\Delta }_i$ is constrained to be positive.
In order to compute the mixture parameters, intermediate parameters ($\hat{\mathbf {w}}_i,\hat{\mathbf {\Delta }}_i,\hat{\mathbf {\sigma }}_i$) are first computed using the MLP in (DISPLAY_FORM5) and then converted to the final parameters using the expressions in Table TABREF6.
The version 0 (V0) row in Table TABREF6 corresponds to the original mechanism proposed in BIBREF0. V1 adds normalization of the mixture weights and components and uses the exponential function to compute the mean offset and variance. V2 uses the softplus function to compute the mean offset and standard deviation.
Another modification we test is the addition of initial biases to the intermediate parameters $\hat{\mathbf {\Delta }}_i$ and $\hat{\mathbf {\sigma }}_i$ in order to encourage the final parameters $\mathbf {\Delta }_i$ and $\mathbf {\sigma }_i$ to take on useful values at initialization. In our experiments, we test versions of V1 and V2 GMM attention that use biases that target a value of $\mathbf {\Delta }_i=1$ for the initial forward movement and $\mathbf {\sigma }_i=10$ for the initial standard deviation (taking into account the different nonlinearities used to compute the parameters).
Two Families of Attention Mechanisms ::: Additive Energy-Based Mechanisms
A separate family of attention mechanisms use an MLP to compute attention energies, $\mathbf {e}_i$, that are converted to attention weights, $\mathbf {\alpha }_i$ using the softmax function. This family includes the content-based mechanism introduced in BIBREF1 and the hybrid location-sensitive mechanism from BIBREF7. A generalized formulation of this family is shown in (DISPLAY_FORM8).
Here we see the content-based terms, $W\mathbf {s}_i$ and $Vj$, that represent query/key comparisons and the location-sensitive term, $U{i,j}$, that uses convolutional features computed from the previous attention weights as in () BIBREF7. Also present are two new terms, $T\mathbf {g}_{i,j}$ and $p_{i,j}$, that are unique to our proposed Dynamic Convolution Attention. The $T\mathbf {g}_{i,j}$ term is very similar to $U{i,j}$ except that it uses dynamic filters that are computed from the current attention RNN state as in (). The $p_{i,j}$ term is the output of a fixed prior filter that biases the mechanism to favor certain types of alignment. Table TABREF9 shows which of the terms are present in the three energy-based mechanisms we compare in this paper.
Two Families of Attention Mechanisms ::: Dynamic Convolution Attention
In designing Dynamic Convolution Attention (DCA), we were motivated by location-relative mechanisms like GMM attention, but desired fully normalized attention weights. Despite the fact that GMM attention V1 and V2 use normalized mixture weights and components, the attention weights still end up unnormalized because they are sampled from a continuous probability density function. This can lead to occasional spikes or dropouts in the alignment, and attempting to directly normalize GMM attention weights results in unstable training. Attention normalization isn't a significant problem in fine-grained output-to-text alignment, but becomes more of an issue for coarser-grained alignment tasks where the attention window needs to gradually move to the next index (for example in variable-length prosody transfer applications BIBREF12). Because DCA is in the energy-based attention family, it is normalized by default and should work well for a variety of monotonic alignment tasks.
Another issue with GMM attention is that because it uses a mixture of distributions with infinite support, it isn't necessarily monotonic. At any time, the mechanism could choose to emphasize a component whose mean is at an earlier point in the sequence, or it could expand the variance of a component to look backward in time, potentially hurting alignment stability.
To address monotonicity issues, we make modifications to the hybrid location-sensitive mechanism. First we remove the content-based terms, $W\mathbf {s}_i$ and $Wi$, which prevents the alignment from moving backward due to a query/key match at a past timestep. Doing this prevents the mechanism from adjusting its alignment trajectory as it is only left with a set of static filters, $U{i,j}$, that learn to bias the alignment to move forward by a certain fixed amount. To remedy this, we add a set of learned dynamic filters, $T\mathbf {g}_{i,j}$, that are computed from the attention RNN state as in (). These filters serve to dynamically adjust the alignment relative to the alignment at the previous step.
In order to prevent the dynamic filters from moving things backward, we use a single fixed prior filter to bias the alignment toward short forward steps. Unlike the static and dynamic filters, the prior filter is a causal filter that only allows forward progression of the alignment. In order to enforce the monotonicity constraint, the output of the filter is converted to the logit domain via the log function before being added to the energy function in (DISPLAY_FORM8) (we also floor the prior logits at $-10^6$ to prevent underflow).
We set the taps of the prior filter using values from the beta-binomial distribution, which is a two-parameter discrete distribution with finite support.
where $\textrm {B}(\cdot )$ is the beta function. For our experiments we use the parameters $\alpha =0.1$ and $\beta =0.9$ to set the taps on a length-11 prior filter ($n=10$), Repeated application of the prior filter encourages an average forward movement of 1 encoder step per decoder step ($\mathbb {E}[k] = \alpha n/(\alpha +\beta )$) with the uncertainty in the prior alignment increasing after each step. The prior parameters could be tailored to reflect the phonemic rate of each dataset in order to optimize alignment speed during training, but for simplicity we use the same values for all experiments. Figure FIGREF12 shows the prior filter along with the alignment weights every 20 decoder steps when ignoring the contribution from other terms in (DISPLAY_FORM8).
Experiments ::: Experiment Setup
In our experiments we compare the GMM and additive energy-based families of attention mechanisms enumerated in Tables TABREF6 and TABREF9. We use the Tacotron architecture described in Section SECREF1 and only vary the attention function used to compute the attention weights, $\mathbf {\alpha }_i$. The decoder produces two 128-bin, 12.5ms-hop mel spectrogram frames per step. We train each model using the Adam optimizer for 300,000 steps with a gradient clipping threshold of 5 and a batch size of 256, spread across 32 Google Cloud TPU cores. We use an initial learning rate of $10^{-3}$ that is reduced to $5\times 10^{-4}$, $3\times 10^{-4}$, $10^{-4}$, and $5\times 10^{-5}$ at 50k, 100k, 150k, and 200k steps, respectively. To convert the mel spectrograms produced by the models into audio samples, we use a separately-trained WaveRNN BIBREF13 for each speaker.
For all attention mechanisms, we use a size of 128 for all tanh hidden layers. For the GMM mechanisms, we use $K=5$ mixture components. For location-sensitive attention (LSA), we use 32 static filters, each of length 31. For DCA, we use 8 static filters and 8 dynamic filters (all of length 21), and a length-11 causal prior filter as described in Section SECREF10.
We run experiments using two different single-speaker datasets. The first (which we refer to as the Lessac dataset) comprises audiobook recordings from Catherine Byers, the speaker from the 2013 Blizzard Challenge. For this dataset, we train on a 49,852-utterance (37-hour) subset, consisting of utterances up to 5 seconds long, and evaluate on a separate 935-utterance subset. The second is the LJ Speech dataset BIBREF14, a public dataset consisting of audiobook recordings that are segmented into utterances of up to 10 seconds. We train on a 12,764-utterance subset (23 hours) and evaluate on a separate 130-utterance subset.
Experiments ::: Alignment Speed and Consistency
To test the alignment speed and consistency of the various mechanisms, we run 10 identical trials of 10,000 training steps and plot the MCD-DTW between a ground truth holdout set and the output of the model during training. The MCD-DTW is an objective similarity metric that uses dynamic time warping (DTW) to find the minimum mel cepstral distortion (MCD) BIBREF15 between two sequences. The faster a model is able to align with the text, the faster it will start producing reasonable spectrograms that produce a lower MCD-DTW.
Figure FIGREF15 shows these trials for 8 different mechanisms for both the Lessac and LJ datasets. Content-based (CBA), location-sensitive (LSA), and DCA are the three energy-based mechanisms from Table TABREF9, and the 3 GMM varieties are shown in Table TABREF6. We also test the V1 and V2 GMM mechanisms with an initial parameter bias as described in Section SECREF3 (abbreviated as GMMv1b and GMMv2b).
Looking at the plots for the Lessac dataset (top of Figure FIGREF15), we see that the mechanisms on the top row (the energy-based family and GMMv2b) all align consistently with DCA and GMMv2b aligning the fastest. The GMM mechanisms on the bottom row don't fare as well, and while they typically align more often than not, there are a significant number failures or cases of delayed alignment. It's interesting to note that adding a bias to the GMMv1 mechanism actually hurts its consistency while adding a bias to GMMv2 helps it.
Looking at the plots for the LJ dataset at bottom of Figure FIGREF15, we first see that the dataset is more difficult in terms of alignment. This is likely due to the higher maximum and average length of the utterances in the training data (most utterances in the LJ dataset are longer than 5 seconds) but could also be caused by an increased presence of intra-utterance pauses and overall lower audio quality. Here, the top row doesn't fare as well: CBA has trouble aligning within the first 10k steps, while DCA and GMMv2b both fail to align once. LSA succeeds on all 10 trials but tends to align more slowly than DCA and GMMv2b when they succeed. With these consistency results in mind, we will only be testing the top row of mechanisms in subsequent evaluations.
Experiments ::: In-Domain Naturalness
We evaluate CBA, LSA, DCA, and GMMv2b using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters. Scores range from 1 to 5, with 5 representing “completely natural speech”. The Lessac and LJ models are evaluated on their respective test sets (hence in-domain), and the results are shown in Table TABREF17. We see that for these utterances, the LSA, DCA, and GMMV2b mechanisms all produce equivalent scores around 4.3, while the content-based mechanism is a bit lower due to occasional catastrophic attention failures.
Experiments ::: Generalization to Long Utterances
Now we evaluate our models on long utterances taken from two chapters of the Harry Potter novels. We use 1034 utterances that vary between 58 and 1648 characters (10 and 299 words). Google Cloud Speech-To-Text is used to produce transcripts of the resulting audio output, and we compute the character errors rate (CER) between the produced transcripts and the target transcripts.
Figure FIGREF20 shows the CER results as the utterance length is varied for the Lessac models (trained on up to 5 second utterances) and LJ models (trained on up to 10 second utterances). The plots show that CBA fares the worst with the CER shooting up when the test length exceeds the max training length. LSA shoots up soon after at around 3x the max training length, while the two location-relative mechanisms, DCA and GMMv2b, are both able to generalize to the whole range of utterance lengths tested.
Discussion
We have shown that Dynamic Convolution Attention (DCA) and our V2 GMM attention with initial bias (GMMv2b) are able to generalize to utterances much longer than those seen during training, while preserving naturalness on shorter utterances. This opens the door for synthesis of entire paragraph or long sentences (e.g., for book or news reading applications), which can improve naturalness and continuity compared to synthesizing each sentence or clause separately and then stitching them together.
These two location-relative mechanisms are simple to implement and do not rely on dynamic programming to marginalize over alignments. They also tend to align very quickly during training, which makes the occasional alignment failure easy to detect so training can be restarted. In our alignment trials, despite being slower to align on average, LSA attention seemed to have an edge in terms of alignment consistency; however, we have noticed that slower alignment can sometimes lead to worse quality models, probably because the other model components are being optimized in an unaligned state for longer.
Compared to GMMv2b, DCA can more easily bound its receptive field (because its prior filter numerically disallows excessive forward movement), which makes it easier to incorporate hard windowing optimizations in production. Another advantage of DCA over GMM attention is that its attention weights are normalized, which helps to stabilize the alignment, especially for coarse-grained alignment tasks.
For monotonic alignment tasks like TTS and speech recognition, location-relative attention mechanisms have many advantages and warrant increased consideration and further study. Supplemental materials, including audio examples, are available on the web. | using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters |
3f326c003be29c8eac76b24d6bba9608c75aa7ea | 3f326c003be29c8eac76b24d6bba9608c75aa7ea_0 | Q: What evaluation metric is used?
Text: Introduction
The detection of offensive language has become an important topic as the online community has grown, as so too have the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths to both avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural langauge processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1.
In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and the “Build it Break it” approach consequently adapting it for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers” were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations.
We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13.
Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community.
Related Work
The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13.
To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, recently BIBREF5 compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither. The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, The Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examine only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of singular utterances and use contextual information to help identify offensive language.
Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large-pretrained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17.
The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using humans breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower quality model cannot. Nevertheless algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset.
As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25 where language to action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but still our approach has commonalities in the round-based improvement of models through crowdworker interaction.
Baselines: Wikipedia Toxic Comments
In this section we describe the publicly available data that we have used to bootstrap our build it break it fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results.
Baselines: Wikipedia Toxic Comments ::: Wikipedia Toxic Comments
The Wikipedia Toxic Comments dataset (WTC) has been collected in a common effort from the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages. We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15 which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into a class offensive while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset to dedicate 10% as a test set. 80% are dedicated to train set while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4.
Baselines: Wikipedia Toxic Comments ::: Models
We establish baselines using two models. The first is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as BERT BIBREF17 and add a linear layer to the output of the first token ([CLS]) to produce the final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base”: the transformer is composed of 12 layers with a hidden size of 768 and 12 attention heads, and we fine-tune the whole network on the classification task. We also compare it to the fastText classifier BIBREF27, for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus derived from Wikipedia; a linear layer is then applied on top to yield a binary classification.
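A hedged sketch of this classifier is shown below, written with the Hugging Face transformers library purely for illustration; the checkpoint name and API shown here are assumptions and not necessarily those of our implementation.

```python
# Linear classification head on the [CLS] output of a pre-trained BERT-base encoder.
import torch.nn as nn
from transformers import BertModel

class OffensiveClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")   # BERT-base weights
        self.head = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_state = out.last_hidden_state[:, 0]     # output of the first token ([CLS])
        return self.head(cls_state)                 # safe vs. offensive logits
```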
Baselines: Wikipedia Toxic Comments ::: Experiments
We compare the two aforementioned models with BIBREF26, who conducted their experiments with a BiLSTM using GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5, and we compare them using the weighted F1, i.e., the sum of the F1 scores of each class weighted by their frequency in the dataset. We also report the F1 of the offensive class, which is the metric we favor within this work, although we report both. (Note that throughout the paper, the notation F1 always refers to the offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments, where most samples are safe, the weighted F1 is closer to the F1 score of the safe class, while we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently.
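For clarity, the two reported metrics can be computed as follows; the labels are toy values and scikit-learn is used purely for illustration.

```python
# Weighted F1 over both classes vs. F1 of the offensive class alone.
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1]    # 0 = safe, 1 = offensive (toy labels)
y_pred = [0, 1, 0, 0, 1]

weighted_f1 = f1_score(y_true, y_pred, average="weighted")
offensive_f1 = f1_score(y_true, y_pred, pos_label=1, average="binary")
print(f"weighted-F1={weighted_f1:.3f}, offensive-F1={offensive_f1:.3f}")
```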
Build it Break it Fix it Method
In order to train models that are robust to adversarial behavior, we posit that it is crucial to collect and train on data that was collected in an adversarial manner. We propose the following automated build it, break it, fix it algorithm (a schematic sketch of the loop is given after the steps below):
Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$.
Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive.
Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks.
Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again.
See Figure FIGREF6 for a visualization of this process.
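The loop can also be summarized with the following schematic sketch; `train` and `collect_adversarial_examples` are placeholders for the model-training and crowdsourcing steps described above, not a released API.

```python
# Schematic sketch of the automated build it / break it / fix it loop.
# The two callables stand in for the steps described in the text:
#   train(data)                      -> a trained classifier
#   collect_adversarial_examples(..) -> a list of adversarial examples from workers

def build_break_fix(wtc_data, num_rounds, train, collect_adversarial_examples):
    a0 = train(wtc_data)             # "build it": baseline A_0 on Wikipedia Toxic Comments
    models = [a0]
    adversarial_data = []
    for _ in range(num_rounds):
        # "break it": workers submit offensive messages that both A_0 and
        # the latest model A_{i-1} mark as safe.
        adversarial_data += collect_adversarial_examples(models_to_beat=[a0, models[-1]])
        # "fix it": retrain on the original data plus all adversarial rounds so far.
        models.append(train(wtc_data + adversarial_data))
    return models
```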
Build it Break it Fix it Method ::: Break it Details ::: Definition of offensive
Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum.
Build it Break it Fix it Method ::: Break it Details ::: Crowdworker Task
We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and to motivate them to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9.
Build it Break it Fix it Method ::: Break it Details ::: Models to Break
During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive.
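In code, the acceptance check for a submitted message in round $i$ could look like the following sketch; `predict_safe` and the model handles are hypothetical placeholders rather than part of our implementation.

```python
# Sketch of the per-message check in the "break it" phase: a submission earns a
# point only if every deployed model (A_0 and, for i > 1, A_{i-1}) labels it
# safe. Whether the message is actually offensive remains a human judgment.
# `predict_safe(model, message)` is a hypothetical helper returning True when
# the given model labels the message as safe.

def beats_the_system(message, deployed_models, predict_safe):
    return all(predict_safe(model, message) for model in deployed_models)
```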
Build it Break it Fix it Method ::: Fix it Details
During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round.
The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks.
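As a small sketch of how the “fix it" training set for model $A_i$ could be assembled from per-round partitions; the data structures here are assumptions for illustration.

```python
# Sketch: assemble the training data for model A_i.
# `rounds` is assumed to be a list of per-round dicts with "train" / "valid" /
# "test" partitions; only the train partitions of rounds 1..i are added, so
# the per-round validation and test partitions remain held out.

def fix_it_training_data(wtc_train, rounds, i):
    data = list(wtc_train)
    for round_data in rounds[:i]:
        data += round_data["train"]
    return data
```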
Single-Turn Task
We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history.
Single-Turn Task ::: Data Collection ::: Adversarial Collection
We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method.
Single-Turn Task ::: Data Collection ::: Standard Collection
In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method, which we refer to as the standard method, we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same.
In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$.
Single-Turn Task ::: Data Collection ::: Task Formulation Details
Since all of the collected examples are labeled as offensive, we also add safe examples in order to make this a binary classification task.
The “safe data" is comprised of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30 which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before.
For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks.
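A minimal sketch of this task construction is given below; the list-based data structures and the sampling procedure are assumptions for illustration, not the exact pipeline used in the paper.

```python
# Sketch: build a binary task with a 9:1 safe-to-offensive ratio by pairing the
# collected offensive examples with reviewed safe utterances from ConvAI2.
# `offensive_texts` and `safe_texts` are assumed lists of strings, with
# len(safe_texts) >= ratio * len(offensive_texts).
import random

def build_task(offensive_texts, safe_texts, ratio=9, seed=0):
    rng = random.Random(seed)
    examples = [(text, "offensive") for text in offensive_texts]
    examples += [(text, "safe") for text in rng.sample(safe_texts, ratio * len(offensive_texts))]
    rng.shuffle(examples)
    return examples
```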
Single-Turn Task ::: Data Collection ::: Model Training Details
Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter, which is also tuned on the validation set. Finally, after training the weights with the cross-entropy loss, we adjust the final bias, also using the validation set. We optimize for the sensitive-class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively.
For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$.
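The final bias adjustment described above can be viewed as tuning a decision threshold for the offensive class on validation data. The sketch below is a generic illustration of that step, not the authors' exact procedure; `valid_probs` and `valid_labels` are assumed NumPy arrays of offensive-class probabilities and gold labels.

```python
# Sketch: after training, pick the decision threshold (equivalently, an added
# bias on the offensive logit) that maximizes offensive-class F1 on validation.
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(valid_probs, valid_labels):
    thresholds = np.linspace(0.05, 0.95, 19)
    scores = [f1_score(valid_labels, (valid_probs >= t).astype(int), pos_label=1)
              for t in thresholds]
    return thresholds[int(np.argmax(scores))]
```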
Single-Turn Task ::: Experimental Results
We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained.
Single-Turn Task ::: Experimental Results ::: Break it Phase
Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations.
We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data, which contains no individually offensive words; the offensive nature is only captured by reading the entire sentence. Other examples require sophisticated world knowledge, such as that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge.
We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix.
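The lexical statistics discussed above (the fraction of examples containing a listed obscenity, and the fraction containing the token “not") can be reproduced with a simple token-matching pass of the following form; the obscenity list itself is a placeholder, not the list used in the paper.

```python
# Sketch: fraction of examples containing any word from a given list.
# OBSCENITIES is a placeholder for the list of common English obscenities
# and otherwise bad words referenced in the text.
import re

OBSCENITIES = {"..."}   # placeholder

def token_set(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def fraction_containing(examples, words):
    hits = sum(1 for text in examples if token_set(text) & set(words))
    return hits / len(examples)

# e.g. fraction_containing(adversarial_round1, OBSCENITIES)
#      fraction_containing(adversarial_round1, {"not"})
```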
Single-Turn Task ::: Experimental Results ::: Fix it Phase
Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$.
Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior.
Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind.
Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase.
Multi-Turn Task
In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?"
Multi-Turn Task ::: Task Implementation
To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier.
We collected 3000 offensive examples in this manner. As in the single-turn set-up, we combine this data with safe examples with a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30.
Multi-Turn Task ::: Models
To measure the impact of the context, we train models on this dataset with and without the given context. We use the fastText and the BERT-based model described in Section SECREF3. In addition, we build a BERT-based model variant that splits the last utterance (to be classified) and the rest of the history into two dialogue segments. Each segment is assigned an embedding, and the input provided to the transformer is the sum of word embedding and segment embedding, replicating the Next Sentence Prediction setup used in the training of BERT BIBREF17.
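As a minimal sketch of the two-segment input, the snippet below uses the token type ids of a HuggingFace BERT tokenizer as an assumed stand-in for the segment embeddings described above; the library choice and the example dialogue are for illustration only.

```python
# Sketch: encode the dialogue history and the utterance to classify as two
# segments, so each token receives a word embedding plus a segment embedding,
# mirroring BERT's Next Sentence Prediction input format.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
history = "should i hurt myself ?"
utterance = "yes , you should definitely do it !"
encoded = tokenizer(history, utterance, return_tensors="pt")
# encoded["token_type_ids"] is 0 over the history segment and 1 over the
# utterance segment; these ids select the segment embedding added to each token.
```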
Multi-Turn Task ::: Experimental Results ::: Break it Phase
During data collection, we observed that workers had an easier time bypassing the classifiers than in the single-turn set-up. See Table TABREF27. In the single-turn set-up, the task at hand gets harder with each round – the average score of the crowdworkers decreases from $4.56$ in round 1 to $1.6$ in round 3. Despite the fact that we are using our best single-turn classifier in the multi-turn set-up ($A_3$), the task becomes easier: the average score per round is $2.89$. This is because the workers are often able to use contextual information to suggest something offensive rather than say something offensive outright. See examples of submitted messages in Table TABREF29. Having context also allows one to express something offensive more efficiently: the messages supplied by workers in the multi-turn setting were significantly shorter on average (see Table TABREF21).
Multi-Turn Task ::: Experimental Results ::: Fix it Phase
During training, we multi-tasked the multi-turn adversarial task with the Wikipedia Toxic Comments task as well as the single-turn adversarial and standard tasks. We average the results of our best models from five different training runs. The results of these experiments are given in Table TABREF31.
As we observed during the training of our baselines in Section SECREF3, the fastText model architecture is ill-equipped for this task relative to our BERT-based architectures. The fastText model performs worse given the dialogue context (an average of 23.56 offensive-class F1 relative to 37.1) than without, likely because its bag-of-embeddings representation is too simple to take the context into account.
We see the opposite with our BERT-based models, indicating that more complex models are able to effectively use the contextual information to detect whether the response is safe or offensive. With the simple BERT-based architecture (that does not split the context and the utterance into separate segments), we observe an average of a 3.7 point increase in offensive-class F1 with the addition of context. When we use segments to separate the context from the utterance we are trying to classify, we observe an average of a 7.4 point increase in offensive-class F1. Thus, it appears that the use of contextual information to identify offensive language is critical to making these systems robust, and improving the model architecture to take account of this has large impact.
Conclusion
We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account.
In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31.
Additional Experimental Results ::: Additional Break It Phase Results
Additional results regarding the crowdworkers' ability to “beat" the classifiers are reported in Table TABREF38. In particular, we report the percent of messages sent by the crowdsource workers that were marked safe and offensive by both $A_0$ and $A_{i-1}$. We note that very infrequently ($<1\%$ of the time) a message was marked offensive by $A_0$ but safe by $A_{i-1}$, showing that $A_0$ was relatively ineffective at catching adversarial behavior.
In Table TABREF39, we report the categorization of examples into classes of offensive language from the blind human annotation of round 1 of the single-turn adversarial and standard data. We observe that in the adversarial set-up, there were fewer examples of bullying language but more examples targeting a protected class.
Additional Experimental Results ::: Additional Fix It Phase Results
We report F1, precision, and recall for the offensive class, as well as weighted-F1 for models $S_i$ and $A_i$ on the single-turn standard and adversarial tasks in Table TABREF41.
Data Collection Interface Details
During the adversarial data collection, we asked users to generate a message that “[the user believes] is not ok but that our system marks as ok," using the definition of “ok" and “not ok" described in the paper (i.e. “ok to send in a friendly conversation with someone you just met online").
In order to generate a variety of responses, during the single-turn adversarial collection, we provided users with a topic to base their response on 50% of the time. The topics were pulled from a set of 1365 crowd-sourced open-domain dialogue topics. Examples include diverse topics such as commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger.
Users were able to earn up to five points per round, with two tries for each point (to allow them to get a sense of the models' weaknesses). Users were informed of their score after each message, and provided with bonuses for good effort. The points did not affect the user's compensation, but rather, were provided as a way of gamifying the data collection, as this has been shown to increase data quality BIBREF25.
Please see an example image of the chat interface in Figure FIGREF42.