| id (string, length 40) | pid (string, length 42) | input (string, 8.37k–169k chars) | output (string, 1–1.63k chars) |
|---|---|---|---|
f27502c3ece9ade265389d5ace90ca9ca42b46f3 | f27502c3ece9ade265389d5ace90ca9ca42b46f3_0 | Q: How do they evaluate generated stories?
Text: Introduction
Collaborative human-machine story-writing has had a recent resurgence of attention from the research community BIBREF0 , BIBREF1 . It represents a frontier for AI research; as a research community we have developed convincing NLP systems for some generative tasks like machine translation, but lag behind in creative areas like open-domain storytelling. Collaborative open-domain storytelling incorporates human interactivity for one of two aims: to improve human creativity via the aid of a machine, or to improve machine quality via the aid of a human. Previously existing approaches treat the former aim, and have shown that storytelling systems are not yet developed enough to help human writers. We attempt the latter, with the goal of investigating at what stage human collaboration is most helpful.
gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations.
System Overview
Figure FIGREF3 shows a diagram of the interaction system. The dotted arrows represent optional user interactions.
Cross-model interaction requires the user to enter a topic, such as “the not so haunted house”, and optionally lets them vary the diversity used in the Storyline Planner or the Story Writer. Diversity numbers correspond directly to softmax temperatures, which we restrict to a reasonable range determined empirically. The settings are sent to the Storyline Planner module, which generates a storyline for the story in the form of a sequence of phrases as per the method of yao2018plan. Everything is then sent to the Story Writer, which returns three stories.
Intra-model interaction enables advanced interactions with one story system of the user's choice. The Storyline Planner returns either one storyline phrase or many, and composes the final storyline out of the combination of phrases the system generated, phrases the user has written, and edits the user has made. These are sent to the Story Writer, which returns either a single sentence or a full story as per the user's request. The process is flexible and iterative. The user can choose how much or how little content they want to provide, edit, or re-generate, and they can return to any step at any time until they decide they are done.
To enable interactive flexibility, the system must handle open-domain user input. User input is lower-cased and tokenized to match the model training data via spaCy. Model output is naively detokenized via Moses BIBREF2 based on feedback from users that this was more natural. User input OOV handling is done via WordNet BIBREF3 by recursively searching for hypernyms and hyponyms (in that order) until either an in-vocabulary word is found or until a maximum distance from the initial word is reached. We additionally experimented with using cosine similarity to GloVe vectors BIBREF4 , but found that to be slower and not qualitatively better for this domain.
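As a concrete illustration of the OOV handling step, the sketch below searches WordNet hypernyms and then hyponyms, level by level, until an in-vocabulary word is found, assuming NLTK's WordNet interface; the vocabulary set, the depth limit and the exact traversal order are illustrative assumptions rather than the system's actual settings.

```python
from nltk.corpus import wordnet as wn

def replace_oov(word, vocab, max_depth=3):
    """Search WordNet for an in-vocabulary substitute of an OOV word by
    expanding hypernyms and then hyponyms level by level; fall back to the
    original word if nothing is found within max_depth steps."""
    frontier = wn.synsets(word)
    for _ in range(max_depth):
        next_frontier = []
        for syn in frontier:
            # hypernyms are tried before hyponyms at each level
            for related in syn.hypernyms() + syn.hyponyms():
                for lemma in related.lemma_names():
                    candidate = lemma.lower().replace("_", " ")
                    if candidate in vocab:
                        return candidate
                next_frontier.append(related)
        frontier = next_frontier
    return word

# e.g. replace_oov("rootkit", vocab=set_of_training_tokens)
```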
Web Interface
Figure FIGREF10 shows screenshots for both the cross-model and intra-model modes of interaction. Figure FIGREF10 shows that the cross-model mode makes clear the differences between different model generations for the same topic. Figure FIGREF10 shows the variety of interactions a user can take in intra-model interaction, and is annotated with an example in action. User-inserted text is underlined in blue; generated text that has been removed by the user is shown in grey strike-through. The refresh symbol marks areas that the user re-generated to get a different sentence (presumably after being unhappy with the first result). As can be seen in this example, minor user involvement can result in a significantly better story.
Model Design
All models for both the Storyline Planner and Story Writer modules are conditional language models implemented with LSTMs based on merity2018regularizing. These are 3-stacked LSTMs that include weight-dropping, weight-tying, variable-length backpropagation with learning rate adjustment, and Averaged Stochastic Gradient Descent (ASGD). They are trained on the ROC dataset BIBREF5, which after lowercasing and tokenization has a vocabulary of 38k. Storyline Phrases are extracted as in yao2018plan via the RAKE algorithm BIBREF6, which results in a slightly smaller Storyline vocabulary of 31k. The Storyline Planner decodes via sampling to encourage creative exploration. The Story Writer has an option to use one or all three systems, all of which decode via beamsearch and are detailed below.
The Title-to-Story system is a baseline, which generates directly from topic.
The Plan-and-Write system adopts the static model in yao2018plan to use the storyline to supervise story-writing.
Plan-and-Revise is a new system that combines the strengths of yao2018plan and holtzman2018learning. It supplements the Plan-and-Write model by training two discriminators on the ROC data and using them to re-rank the LSTM generations to prefer increased creativity and relevance. Thus the decoding objective of this system becomes $f_\lambda(x, y) = \log P_{\mathrm{lm}}(y|x) + \sum_{k} \lambda_k s_k(x, y)$, where $P_{\mathrm{lm}}(y|x)$ is the conditional language model probability of the LSTM, $s_k$ is the scoring function of the $k$-th discriminator, and $\lambda_k$ is the learned weight of that discriminator. At each timestep all live beam hypotheses are scored and re-ranked. Discriminator weights are learnt by minimizing Mean Squared Error on the difference between the scores of gold standard and generated story sentences.
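The re-ranking step can be pictured with the minimal sketch below: each live beam hypothesis is re-scored by adding the weighted discriminator scores to its language-model log-probability. The scoring functions and weights are placeholders, not the paper's trained discriminators.

```python
def rerank_beam(hypotheses, discriminators, weights):
    """Re-rank live beam hypotheses at a decoding step.

    hypotheses    : list of (tokens, lm_logprob) pairs from beamsearch
    discriminators: list of scoring functions s_k(tokens) -> float
    weights       : list of learned weights lambda_k, one per discriminator
    """
    def score(hyp):
        tokens, lm_logprob = hyp
        return lm_logprob + sum(
            lam * s(tokens) for lam, s in zip(weights, discriminators))
    return sorted(hypotheses, key=score, reverse=True)

# e.g. beam = rerank_beam(beam, [creativity_scorer, relevance_scorer], [0.4, 0.6])
```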
Experiments
We experiment with six types of interaction: five variations created by restricting different capabilities of our system, and a sixth turn-taking baseline that mimics the interaction of the previous work BIBREF1 , BIBREF7 . We choose our experiments to address the research questions: What type of interaction is most engaging? Which type results in the best stories? Can a human tasked with correcting for certain weaknesses of a model successfully do so? The variations on interactions that we tested are:
We expand experiment 5 to answer the question of whether a human-in-the-loop interactive system can address specific shortcomings of generated stories. We identify three types of weaknesses common to generation systems – Creativity, Relevance, and Causal & Temporal Coherence, and conduct experiments where the human is instructed to focus on improving specifically one of them. The targeted human improvement areas intentionally match the Plan-and-Revise discriminators, so that, if successful, the "human discriminator" data can assist in training the machine discriminators. All experiments (save experiment 2, which lets the user pick between models) use the Plan-and-Revise system.
Details
We recruit 30 Mechanical Turk workers per experiment (270 unique workers total) to complete story writing tasks with the system. We constrain them to ten minutes of work (five for writing and five for a survey) and provide them with a fixed topic to control this factor across experiments. They co-create a story and complete a questionnaire which asks them to self-report on their engagement, satisfaction, and perception of story quality. For the additional focused error-correction experiments, we instruct Turkers to try to improve the machine-generated stories with regard to the given aspect, under the same time constraints. As an incentive, they are given a small bonus if they are later judged to have succeeded.
We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis.
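The rating-aggregation rule can be sketched as follows, assuming ratings arrive as pairs per story and metric; the data layout is hypothetical.

```python
def aggregate_ratings(rating_pairs, max_gap=2):
    """rating_pairs: list of (r1, r2) tuples on a five-point scale for one
    metric.  Pairs that disagree by more than max_gap points are discarded;
    the remaining pairs are averaged."""
    kept = [(a + b) / 2.0 for a, b in rating_pairs if abs(a - b) <= max_gap]
    discarded = len(rating_pairs) - len(kept)
    return kept, discarded

# e.g. scores, n_dropped = aggregate_ratings([(4, 5), (1, 4), (3, 3)])
```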
Conclusions and Future Work
We have shown that all levels of human-computer collaboration improve story quality across all metrics, compared to a baseline computer-only story generation system. We have also shown that flexible interaction, which allows the user to return to edit earlier text, improves the specific metrics of creativity and causal-temporal coherence above previous rigid turn-taking approaches. We find that, as well as improving story quality, more interaction makes users more engaged and likely to use the system again. Users tasked with collaborating to improve a specific story quality were able to do so, as judged by independent readers.
As the demo system has successfully used an ensemble of collaborative discriminators to improve the same qualities that untrained human users were able to improve even further, this suggests promising future research into human-collaborative stories as training data for new discriminators. It could be used both to strengthen existing discriminators and to develop novel ones, since discriminators are extensible to arbitrarily many story aspects.
Acknowledgments
We thank the anonymous reviewers for their feedback, as well as the members of the PLUS lab for their thoughts and iterative testing. This work is supported by Contract W911NF-15- 1-0543 with the US Defense Advanced Research Projects Agency (DARPA).
Demo Video
The three-minute video demonstrating the interaction capabilities of the system can be viewed at https://youtu.be/-hGd2399dnA. (Same video as linked in the paper footnote).
Decoding
Default diversity (Softmax Temperature) for the Storyline Planner is 0.5; for the Story Writer it is None (as beamsearch is used and thus can have, but does not require, a temperature). Beam size for all Story Writer models is 5. Additionally, Storyline Phrases are constrained to be unique (unless a user duplicates them), and beamsearch is not normalized by length (both choices determined empirically).
Training
We follow the parameters used in yao2018plan and merity2018regularizing.
Mechanical Turk Materials
Following are examples of the materials used in doing Mechanical Turk User Studies. Figure FIGREF37 is an example of the All + Creative focused experiment for story-writing. The instructions per experiment differ across all, but the template is the same. Figure FIGREF38 is the survey for ranking stories across various metrics. This remains constant save that story order was shuffled every time to control for any effects of the order a story was read in. | separate set of Turkers to rate the stories for overall quality and the three improvement areas |
ffb7a12dfe069ab7263bb7dd366817a9d22b8ef2 | ffb7a12dfe069ab7263bb7dd366817a9d22b8ef2_0 | Q: Do they evaluate in other languages apart from English? | Unanswerable |
aa4b38f601cc87bf93849245d5f65124da3dc112 | aa4b38f601cc87bf93849245d5f65124da3dc112_0 | Q: What are the baselines? | Title-to-Story system |
08b87a90139968095433f27fc88f571d939cd433 | 08b87a90139968095433f27fc88f571d939cd433_0 | Q: What is used as a baseline?
Text: Introduction
Indicators of Compromise (IOCs) are forensic artifacts that are used as signs that a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text describing attack tactics, techniques and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1. In this text, the token “INST.exe” is the name of an executable file of a piece of malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. These kinds of IOCs can then be utilized for early detection of future attack attempts by intrusion detection systems and antivirus software, and thus they play an important role in the field of cybersecurity. However, with the rapid evolution of cyber threats, IOC data are produced at high volume and velocity every day, which makes it increasingly hard for humans to gather and manage them.
A number of systems have been proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, most of these systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and these features often have to be pre-defined by experts in the field of cybersecurity. Furthermore, they need a large amount of annotated data as training data for an IOC classifier. Such training data are difficult to crowd-source, because non-experts can hardly distinguish IOCs from non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.
In this work, we consider the task of collecting IOCs from cybersecurity articles as a sequence labelling task in natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned a label, and tokens assigned IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.
Among the previous studies of the neural sequence labelling task, Zhou et al. BIBREF12 first proposed using an end-to-end neural sequence labelling model to fully automate the process of IOC identification. Their model is based on an artificial neural network (ANN) with bidirectional LSTM and CRF. However, their newly introduced spelling features lead to the extraction of more false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features to the ANN model so that the proposed model can better gather contextual information from unstructured text for the task of IOC identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and recall of 85.2% on the English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on the Chinese test set. We further evaluate the proposed model by training it on both the English and Chinese datasets, which achieves even better performance.
Model
Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture.
Token Embedding Layer
The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2, given an input sequence of tokens $x_1, x_2, \ldots, x_n$, the output vector $e_i$ ($1 \le i \le n$) of each token $x_i$ results from the concatenation of two different types of embeddings: the token embedding $e_i^{w}$ and the character-based token embedding $e_i^{c}$ that comes from the output of a character-level Bi-LSTM encoder.
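A minimal PyTorch sketch of this layer, assuming the embedding sizes reported later in the training details (100-dimensional token embeddings, 25-dimensional character embeddings and character-LSTM hidden states); module and variable names are illustrative.

```python
import torch
import torch.nn as nn

class TokenEmbedder(nn.Module):
    """Concatenates a token embedding with the final forward and backward
    states of a character-level Bi-LSTM run over the token's characters."""
    def __init__(self, vocab_size, char_vocab_size,
                 word_dim=100, char_dim=25, char_hidden=25):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 batch_first=True, bidirectional=True)

    def forward(self, token_ids, char_ids):
        # token_ids: (batch,) token indices; char_ids: (batch, num_chars)
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
        char_repr = torch.cat([h_n[0], h_n[1]], dim=-1)  # fwd and bwd final states
        return torch.cat([self.word_emb(token_ids), char_repr], dim=-1)

# e.g. TokenEmbedder(38000, 128)(torch.tensor([7]), torch.tensor([[3, 1, 4, 2]]))
```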
Sequence Representation Layer
The Sequence Representation Layer takes the sequence of embeddings $e_i$ ($1 \le i \le n$) as input and outputs a sequence $p_1, p_2, \ldots, p_n$, where the $j$-th element of $p_i$ represents the probability that the $i$-th token has the label $l_j$.
Different from the previous work on sequence labelling in news articles or patient notes BIBREF9, BIBREF10, sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for an LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12, we propose a sequence representation layer that consists of 3 modules, i.e., an attention-based Bi-LSTM module, a multi-head self-attention module and a token feature module.
Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce an attention mechanism into the Bi-LSTM to extract the tokens that are crucial to the meaning of the sentence. Then, we aggregate the representations of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13, which is defined as follows:

$$u_i = \tanh(W_a h_i + b_a), \qquad \alpha_i = \frac{\exp(u_i^{\top} u_a)}{\sum_{j}\exp(u_j^{\top} u_a)}, \qquad s = \sum_{i}\alpha_i h_i$$

That is to say, we first compute $u_i$ as a hidden representation of the Bi-LSTM hidden state $h_i$ for the $i$-th input token, where $h_i$ is obtained by concatenating the $i$-th hidden states of the forward and backward LSTM, i.e., $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$. Then, we measure the importance of the $i$-th token with a trainable vector $u_a$ and get a normalized importance weight $\alpha_i$ through a softmax function. After that, the sentence vector $s$ is computed as a weighted sum of the $h_i$ ($1 \le i \le n$). Here, the weight matrix $W_a$, the bias $b_a$ and the vector $u_a$ are randomly initialized and jointly learned during the training process. Note that each input sentence has only one sentence vector $s$ as its weighted representation; $s$ is then used as a part of the $i$-th output of the attention-based Bi-LSTM module, $g_i = [h_i; s]$ ($1 \le i \le n$).
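A NumPy sketch of the attention pooling above; $W_a$, $b_a$ and $u_a$ stand for the trainable parameters and are initialised randomly here purely for illustration.

```python
import numpy as np

def attention_pool(H, W_a, b_a, u_a):
    """H: (n, 2d) matrix of Bi-LSTM hidden states h_1..h_n.
    Returns the sentence vector s = sum_i alpha_i * h_i."""
    U = np.tanh(H @ W_a + b_a)        # (n, d_a) hidden representations u_i
    scores = U @ u_a                  # (n,)  importance of each token
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()              # softmax weights alpha_i
    return alpha @ H                  # (2d,) weighted sum of hidden states

# illustration with random parameters
n, two_d, d_a = 6, 200, 100
H = np.random.randn(n, two_d)
s = attention_pool(H, np.random.randn(two_d, d_a), np.zeros(d_a), np.random.randn(d_a))
```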
Motivated by the successful application of self-attention in many NLP tasks BIBREF14, BIBREF15, we add a multi-head self-attention module to adaptively enhance the embedding of each word with the information of other words in the text. By means of this, the local text regions on which convolution operates carry the global information of the text. Following the encoder part of Vaswani et al. BIBREF14, the multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings $e_1, \ldots, e_n$ as input, the output is defined as follows:

$$\mathrm{MultiHead}(Q, K, V) = [\mathrm{head}_1; \ldots; \mathrm{head}_h]\, W^{O}, \qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q}, K W_i^{K}, V W_i^{V})$$

where $W_i^{Q}$, $W_i^{K}$ and $W_i^{V}$ are parameter matrices for the projections of the queries $Q$, keys $K$ and values $V$ in the $i$-th head, respectively. Here, $Q$, $K$ and $V$ are all set to the input sequence $e_1, \ldots, e_n$. The output of $\mathrm{MultiHead}$ is then given to the two convolutions, and the output of the multi-head self-attention module $m_i$ ($1 \le i \le n$) is obtained.
Furthermore, we introduce some features of IOCs to improve the performance of the proposed model on a very small amount of training data. Here, we define two types of features, i.e., spelling features and contextual features, and map each token $x_i$ ($1 \le i \le n$) to a feature vector $f_i = [f_i^{s}; f_i^{c}]$, where $f_i^{s}$ is the spelling feature vector and $f_i^{c}$ is the contextual feature vector. Note that the values of the features are jointly learned during the process of training. In Section SECREF3, we will explain the features in more detail.
As shown in Fig. FIGREF2, the vector $o_i$ ($1 \le i \le n$) is a concatenation of $g_i$, $m_i$ and $f_i$. Each vector $o_i$ is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector $p_i$.
CRF Layer
We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence $y = (y_1, \ldots, y_n)$ is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities:

$$s(y) = \sum_{i=1}^{n} p_{i, y_i} + \sum_{i=2}^{n} A_{y_{i-1}, y_i}$$

where $A$ is a matrix that contains the transition probabilities of two subsequent labels. Vector $p_i$ is the output of the token LSTM layer, and $p_{i, y_i}$ is the probability of label $y_i$ in $p_i$. $A_{y_{i-1}, y_i}$ is the probability that a token with label $y_{i-1}$ is followed by a token with the label $y_i$. Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences.
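A small sketch of the sequence score above, with the per-token label scores and the transition matrix passed in as plain arrays; the softmax normalisation over all possible label sequences (the forward algorithm) is omitted.

```python
import numpy as np

def sequence_score(P, A, labels):
    """P: (n, num_labels) per-token label scores p_{i,y};
    A: (num_labels, num_labels) transition scores A[y_prev, y];
    labels: list of n label indices.  Returns the unnormalised CRF score."""
    unigram = sum(P[i, y] for i, y in enumerate(labels))
    bigram = sum(A[labels[i - 1], labels[i]] for i in range(1, len(labels)))
    return unigram + bigram

# e.g. sequence_score(np.random.rand(4, 11), np.random.rand(11, 11), [0, 3, 4, 0])
```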
Features
We extract a vector of features for each token of the input sequences. In this section, we present each feature category in detail.
Spelling Features
Since IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify them. For example, to identify a URL, we defined a regular expression and set the value of the URL feature to 1 when the input token matches it. However, such expressions and spelling rules can introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we therefore further introduce the contextual features described next.
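Since the paper does not reproduce its regular expressions, the patterns below are illustrative stand-ins for the kind of spelling rules described (URL, IPv4 address, MD5 hash, file name), not the original definitions.

```python
import re

# Illustrative spelling-rule patterns; stand-ins, not the paper's own expressions.
SPELLING_RULES = {
    "url":  re.compile(r"^(?:https?|hxxps?|ftp)://\S+$", re.IGNORECASE),
    "ipv4": re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "md5":  re.compile(r"^[0-9a-f]{32}$", re.IGNORECASE),
    "file": re.compile(r"^[\w.-]+\.(?:exe|dll|sh|bat|js)$", re.IGNORECASE),
}

def spelling_features(token):
    """Return a 0/1 feature vector with one element per spelling rule."""
    return [1 if rule.match(token) else 0 for rule in SPELLING_RULES.values()]

# spelling_features("ntdll.exe") -> [0, 0, 0, 1]
```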
Contextual Features
IOCs in cybersecurity articles are often described in a predictable way: they are connected to a set of contextual keywords BIBREF16, BIBREF1. For example, a human reader can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” in the text shown in Fig. FIGREF1. Analyzing the whole corpus, it is interesting that malicious file names tend to co-occur with words such as "download", "malware", "malicious", etc. In this work, we consider words that can indicate the characteristics of their neighboring words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.
Taking the above into account, we introduce the contextual feature vector $f_i^{c}$ for a given input token $x_i$, where the $j$-th element of $f_i^{c}$ is defined as follows:

$$f_{i,j}^{c} = \frac{\mathrm{freq}_{w}(k_j, x_i)}{\mathrm{freq}(x_i)}$$

$\mathrm{freq}(x_i)$ is the frequency of token $x_i$ in the whole corpus, while $\mathrm{freq}_{w}(k_j, x_i)$ is the frequency of contextual keyword $k_j$ in the windowed portions of the texts centering on the token $x_i$ in the whole corpus, and $w$ is the size of the window. The set of contextual keywords $K = \{k_1, \ldots, k_m\}$ is automatically extracted from the annotated texts, where each contextual keyword $k_j$ ($1 \le j \le m$) satisfies the following conditions:

$k_j$ co-occurs with the manually annotated IOCs at least $\theta$ times, where $\theta$ is the lower bound of the frequency.

$k_j$ is not a punctuation mark or a stopword.

Note that we extract contextual keywords only from the manually annotated data (e.g., the training set), while we compute the contextual feature vector over all of the unlabeled data. According to this definition, the dimension of the contextual feature vector is the same as the number of extracted contextual keywords. The size of the window $w$ and the lower bound of the frequency $\theta$ are then tuned on the validation set.
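A sketch of the contextual-feature computation, assuming the corpus is available as a list of token lists; function and variable names are illustrative.

```python
from collections import Counter

def contextual_features(target, corpus_sentences, keywords, window=4):
    """Return the vector whose j-th element is freq_w(k_j, target) / freq(target):
    how often keyword k_j falls inside a window of size `window` around
    occurrences of `target`, normalised by the frequency of `target`."""
    keyword_set = set(keywords)
    target_freq = 0
    co_occur = Counter()
    for sent in corpus_sentences:
        for i, tok in enumerate(sent):
            if tok != target:
                continue
            target_freq += 1
            lo, hi = max(0, i - window), i + window + 1
            co_occur.update(w for w in sent[lo:i] + sent[i + 1:hi]
                            if w in keyword_set)
    if target_freq == 0:
        return [0.0] * len(keywords)
    return [co_occur[k] / target_freq for k in keywords]

# e.g. contextual_features("ntdll.exe", corpus, ["download", "malicious", "hash"])
```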
Usage of Features
The feature vector for an input token is the concatenation of the spelling feature vector and the contextual feature vector. Here, to elucidate the best usage of the feature vector, we evaluate it by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer, the hidden state of the token LSTM, and the output of the token LSTM. Among them, concatenating the feature vector with the LSTM hidden state vector and the attention sentence vector in the token LSTM layer, as shown in Section SECREF4, achieved the best performance. We speculate that the features play an important role in the task of IOC identification and that feature vectors near the output layer improve the performance more significantly than those at other locations.
Datasets
For the English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threat (APT) reports published from 2008 to 2018. All of these articles are used to train the English word embedding. Afterwards, we randomly select 370 articles and manually annotate the IOCs they contain. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.
For the Chinese dataset, we crawl 5,427 cybersecurity articles from 35 online cybersecurity blogs published from 2001 to 2018. All of these articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles and manually annotate the IOCs they contain. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.
TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme.
Training Details
For pre-trained token embeddings, we apply word2vec BIBREF17 to all 687 crawled English APT reports and all 5,427 crawled Chinese cybersecurity articles described in Section SECREF21, respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.
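The word2vec setup described above could be reproduced roughly as follows, assuming the gensim library (4.x argument names); the dummy corpus only stands in for the crawled articles so that the snippet runs.

```python
from gensim.models import Word2Vec

# In practice `sentences` is an iterable of token lists from the crawled articles.
sentences = [["malware", "downloads", "ntdll.exe"],
             ["the", "campaign", "used", "spearphishing", "emails"]]

model = Word2Vec(
    sentences,
    vector_size=100,  # dimension of the output token embedding
    window=8,
    min_count=1,
    sg=1,             # skip-gram model type
    negative=8,       # negative sampling number
    epochs=15,
)
model.wv.save_word2vec_format("token_embeddings.txt")
```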
The ANN model is trained with stochastic gradient descent to update all parameters, i.e., token embedding, character embedding, parameters of the Bi-LSTM, weights of the sentence attention, weights of the multi-head self-attention, token features, and transition probabilities of the CRF layer, at each gradient step. For regularization, dropout is applied to the output of each sub-layer of the ANN model. Further training details are given below: (a) For the attention-based Bi-LSTM module, the dimensions of the character embedding, the hidden states of the character-based token embedding LSTM, the hidden states of the Bi-LSTM, and the sentence attention are set to 25, 25, 100 and 100, respectively. For the multi-head self-attention module, we employ a stack of 6 multi-head self-attention layers, each of which has 4 heads, and the dimension of each head is set to 64. (b) All of the ANN's parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of training epochs is set to 30. After the first 30 epochs, we compute the average F1-score on the validation set with the current model after every additional epoch, and stop training when the average F1-score on the validation set fails to increase during the last ten epochs. If training is not stopped early, we train for a maximum of 100 epochs. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5.
Results
As shown in TABLE TABREF24, we report the micro average of precision, recall and F1-score over all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge input tokens as IOCs on the basis of the spelling features described in BIBREF12. As presented in TABLE TABREF24, the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14, the window size and the lower bound of frequency for selecting contextual keywords are tuned to 4 and 7 throughout the evaluation of the English dataset, and to 3 and 4 throughout the evaluation of the Chinese dataset. The number of contextual keywords extracted from the English dataset is 1,328, and from the Chinese dataset 331.
Furthermore, we quantitatively compare our study with other typical works of sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation was concatenated with word embedding and Rei et al. BIBREF18 improved the model by introducing an attention mechanism to the character-level representations. We train these models by employing the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains the highest precision, recall and F1-score than other models in the task of IOCs extraction. Compared with the second-best model of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% of precision and 10.0% of recall. The performance gain of the proposed model on the Chinese dataset is approximately 4.2% of precision and 9.0% of recall.
We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.
TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with the output of the model of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”; thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL: the token is recognized as a URL by the spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify the token “cr.sh” in the input Chinese text as a malicious file name, while the proposed model assigns the token the correct label. This is mainly because the token “cr.sh” is recognized as file information by the spelling features and tends to co-occur with the words “” (download) and “” (mining software). These two words often appear near malicious file information and are therefore extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features.
Analysis of Contextual Features
The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them with the whole ANN model during training. To demonstrate the effectiveness of the contextual features, we visualize the learned weight matrix of the contextual keywords and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of the contextual keywords for the given token. From this we see which contextual keywords are considered more important for representing the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphshing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, the contextual keywords “droppper” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For the non-IOC token “socket”, the contextual keywords “gateway” and “port” yield larger weights than other keywords because “socket” tends to co-occur with “gateway” and “port”.
We further calculate the average weight of each contextual keyword and show the 10 largest- and 10 smallest-weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as “hash” and “filename”, which tend to co-occur with malicious filenames, have the largest weights for IOCs, while contextual keywords such as “ascii” and “password” have the largest weights for non-IOCs. It is interesting to find that the contextual keywords “dropped” and “droppper”, which tend to co-occur with malicious file information and malware, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences in contextual information between IOCs and non-IOCs that are captured by the contextual features, and thus achieves better performance than previous works.
Training the Proposed Model with Bilingual Data
Even though security articles are written in different languages, most of the IOCs are written in English, and are described in a similar pattern. Therefore, using multilingual corpora could be a solution for addressing the lack of annotated data, and the performance of the proposed model is expected to be improved by extending the training set. To examine the hypothesis, we ran a number of additional experiments using both the English dataset and Chinese dataset, both of which are described in Section SECREF21 and are not parallel data or comparable data.
As pre-trained word embeddings for the bilingual training dataset, we applied the cross-lingual word embedding of Duong et al. BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words of the English dataset to Chinese and all the Chinese words of the Chinese dataset to English using Google Translate. As the contextual feature vector, we concatenate the contextual feature vector obtained from the English dataset with the one obtained from the Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set. TABLE TABREF31 shows that the proposed model trained with both the English and Chinese training sets achieves a small improvement in F1-score on the English test set compared with the model trained with only the English training set, and a large improvement in F1-score on the Chinese test set compared with the model trained with only the Chinese training set.
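A minimal sketch of the feature concatenation and training-set merging steps is shown below; the function names, the fixed seed and the zero-vector convention for tokens missing from one corpus are our own assumptions:

```python
import random
import numpy as np

def bilingual_contextual_features(ctx_feat_en, ctx_feat_zh):
    """Concatenate a token's English and Chinese contextual feature vectors.

    In this sketch, a token that never occurs in one corpus is assumed to
    contribute a zero vector for that language.
    """
    return np.concatenate([ctx_feat_en, ctx_feat_zh])

def merge_training_sets(english_sentences, chinese_sentences, seed=0):
    """Merge the two annotated training sets into one bilingual training set."""
    merged = list(english_sentences) + list(chinese_sentences)
    random.Random(seed).shuffle(merged)  # mix the two languages in the training order
    return merged
```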
TABLE TABREF32 compares the scores for each label when the proposed model is trained with different training sets. On the English test set, the F1-scores of the labels “attack method”, “attack target” and “malware” obtained by the model trained with both the English and Chinese training sets are lower than those obtained by the model trained with only the English training set. This is mainly because tokens of these labels can be written in different languages, which harms the model trained with the bilingual training set. In contrast, benefiting from the extended training set, for types of labels that are usually written in English, e.g., “domain”, “file information”, “IPv4” and “vulnerability”, the proposed model trained with both the English and Chinese training sets achieves higher scores than the model trained with only the English training set. On the Chinese test set, the proposed model trained with both the English and Chinese training sets obtains obviously higher F1-scores than the model trained with only the Chinese training set for almost all types of labels. It is interesting to find that the labels “e-mail address”, “attack method” and “attacker”, which lack instances in the Chinese training set, show the biggest improvement with the model trained on the bilingual training set.
Conclusions
To conclude, in this paper we introduce a multi-head self-attention module and contextual features into the neural sequence labelling model, which significantly improves the performance on the task of IOC identification. The evaluation results of our experiments show that the proposed model is effective on both the English test set and the Chinese test set. We further evaluated the proposed model by training it on both the English and Chinese training sets and comparing it with models trained on only one training set, and the model trained with the merged bilingual training set performs better.
One of our future works is to integrate contextual embeddings from the bidirectional language model into our proposed model. Pretrained neural language models have proved effective in sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . Integrating both the contextual features and such contextual embeddings into the neural sequence labelling model is expected to further improve the performance of the proposed model. | As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12
ef872807cb0c9974d18bbb886a7836e793727c3d | ef872807cb0c9974d18bbb886a7836e793727c3d_0 | Q: What contextual features are used?
Text: Introduction
Indicators of Compromise (IOCs) are forensic artifacts that are used as signs that a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text describing attack tactics, techniques and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . In the text, the token “INST.exe” is the name of an executable file of a malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, such IOCs can then be utilized for early detection of future attack attempts by intrusion detection systems and antivirus software, and thus they play an important role in the field of cybersecurity. However, with the rapid evolution of cyber threats, IOC data are produced at a high volume and velocity every day, which makes it increasingly hard for humans to gather and manage them.
A number of systems have been proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . However, most of those systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and these features often have to be pre-defined by experts in the field of cybersecurity. Furthermore, such systems need a large amount of annotated data as training data to train an IOC classifier. Those training data are frequently difficult to crowd-source, because non-experts can hardly distinguish IOCs from non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.
In this work, we consider the task of collecting IOCs from cybersecurity articles as a task of sequence labelling of natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned with a label, and tokens assigned with IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve a remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.
Among the previous studies of the neural sequence labelling task, Zhou et al. BIBREF12 first proposed using an end-to-end neural sequence labelling model to fully automate the process of IOC identification. Their model is based on an artificial neural network (ANN) with a bidirectional LSTM and a CRF. However, their newly introduced spelling features lead to the extraction of more false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features to the ANN model so that the proposed model can better gather the contextual information from the unstructured text for the task of IOC identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and recall of 85.2% on the English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on the Chinese test set. We further evaluate the proposed model by training it on both the English and Chinese datasets, which achieves even better performance.
Model
Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture.
Token Embedding Layer
The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder.
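A compact PyTorch sketch of such a layer is shown below; the module and variable names are ours, and the dimensions (word embedding 100, character embedding 25, character LSTM hidden size 25) follow the training details reported later:

```python
import torch
import torch.nn as nn

class TokenEmbedding(nn.Module):
    def __init__(self, vocab_size, char_vocab_size,
                 word_dim=100, char_dim=25, char_hidden=25):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 batch_first=True, bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (seq_len,), char_ids: (seq_len, max_word_len)
        w = self.word_emb(word_ids)                      # (seq_len, 100)
        c = self.char_emb(char_ids)                      # (seq_len, max_word_len, 25)
        _, (h_n, _) = self.char_lstm(c)                  # h_n: (2, seq_len, 25)
        char_repr = torch.cat([h_n[0], h_n[1]], dim=-1)  # forward + backward final states
        # Concatenate the word embedding with the character-based token embedding.
        return torch.cat([w, char_repr], dim=-1)         # (seq_len, 150)
```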
Sequence Representation Layer
The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .
Different from the previous work of sequence labelling in news articles or patient notes BIBREF9 , BIBREF10 , sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12 , we propose sequence representation layer that consists of 3 modules, i.e., attention-based Bi-LSTM module, multi-head self-attention module and token feature module.
Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce attention mechanism to Bi-LSTM to extract such tokens that are crucial to the meaning of the sentence. Then, we aggregate the representation of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13 , which is defined as follows: DISPLAYFORM0
That is to say, we first compute the INLINEFORM0 as a hidden representation of the hidden states of Bi-LSTM INLINEFORM1 for INLINEFORM2 input token, where INLINEFORM3 is obtained by concatenating the INLINEFORM4 hidden states of forward and backward LSTM, i.e., INLINEFORM5 . Then, we measure the importance of the INLINEFORM6 token with a trainable vector INLINEFORM7 and get a normalized importance weight INLINEFORM8 through a softmax function. After that, the sentence vector INLINEFORM9 is computed as a weighted sum of INLINEFORM10 ( INLINEFORM11 ). Here, weight matrix INLINEFORM12 , bias INLINEFORM13 and vector INLINEFORM14 are randomly initialized and jointly learned during the training process. Note that each input sentence merely has one sentence vector INLINEFORM15 as its weighted representation, and INLINEFORM16 is then used as a part of the INLINEFORM17 output of attention-based Bi-LSTM module, where INLINEFORM18 ( INLINEFORM19 ).
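The attention step described above can be sketched in PyTorch as follows (naming is ours; the linear projection plays the role of the trainable weight matrix and bias, and `context` plays the role of the trainable importance vector):

```python
import torch
import torch.nn as nn

class SentenceAttention(nn.Module):
    """Word-level attention over Bi-LSTM hidden states (Yang et al. style)."""
    def __init__(self, hidden_dim, attn_dim=100):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, attn_dim)          # weight matrix and bias
        self.context = nn.Parameter(torch.randn(attn_dim))   # trainable importance vector

    def forward(self, h):
        # h: (seq_len, hidden_dim) -- concatenated forward/backward LSTM states
        u = torch.tanh(self.proj(h))             # hidden representation of each state
        scores = u @ self.context                # importance of each token
        alpha = torch.softmax(scores, dim=0)     # normalized importance weights
        s = (alpha.unsqueeze(-1) * h).sum(dim=0) # sentence vector: weighted sum of states
        return s, alpha
```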
Motivated by the successful application of self-attention in many NLP tasks BIBREF14 , BIBREF15 , we add a multi-head self-attention module to adaptively enhance the embedding of each word with information from the other words in a text. By means of this, the local text regions on which convolution operates carry the global information of the text. Following the encoder part of Vaswani et al. BIBREF14 , the multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings INLINEFORM0 as input, the output is defined as follows: DISPLAYFORM0
where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are parameter matrices for the projections of queries INLINEFORM3 , keys INLINEFORM4 and values INLINEFORM5 in the INLINEFORM6 head, respectively. Here, INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are set as the input sequence INLINEFORM10 ( INLINEFORM11 ). The INLINEFORM12 is then given to the two convolutions and the output of multi-head self-attention INLINEFORM13 ( INLINEFORM14 ) is obtained.
Furthermore, we introduce features tailored to IOCs to improve the performance of the proposed model when only a very small amount of training data is available. Here, we define two types of features, i.e., spelling features and contextual features, and map each token INLINEFORM0 ( INLINEFORM1 ) to a feature vector INLINEFORM2 , where INLINEFORM3 is the spelling feature vector and INLINEFORM4 is the contextual feature vector. Note that the values of the features are jointly learned during the training process. In Section SECREF3 , we explain the features in more detail.
As shown in Fig. FIGREF2 , the vector INLINEFORM0 ( INLINEFORM1 ) is a concatenation of the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . Each vector INLINEFORM5 is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector INLINEFORM6 .
CRF Layer
We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence INLINEFORM0 is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities: DISPLAYFORM0
where INLINEFORM0 is a matrix that contains the transition probabilities of two subsequent labels. Vector INLINEFORM1 is the output of the token LSTM layer, and INLINEFORM2 is the probability of label INLINEFORM3 in INLINEFORM4 . INLINEFORM5 is the probability that a token with label INLINEFORM6 is followed by a token with the label INLINEFORM7 . Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences.
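For a single candidate label sequence, this score can be computed as in the sketch below; in practice the normalization over all possible label sequences is obtained with the forward algorithm rather than by enumeration:

```python
import torch

def crf_sequence_score(z, A, labels):
    """Score of one label sequence: sum of unigram label probabilities
    plus bigram label transition probabilities.

    z      : (seq_len, num_labels) output of the sequence representation layer
    A      : (num_labels, num_labels) transitions, A[i, j] = score of label j following i
    labels : list[int] of length seq_len, the candidate label sequence
    """
    unigram = sum(z[t, labels[t]] for t in range(len(labels)))
    bigram = sum(A[labels[t - 1], labels[t]] for t in range(1, len(labels)))
    return unigram + bigram
```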
Features
We extract a vector of features for each tokens of input sequences. In this section, we present each feature category in detail.
Spelling Features
Since the IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify IOCs. For example, to identify a URL, we defined a regular expression INLINEFORM0 and set the value of the URL feature to 1 when the input token matches the regular expression. However, such expressions and spelling rules could introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we further introduce the contextual features as described next.
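The paper's own regular expressions are not reproduced here, so the following sketch uses simplified, illustrative patterns to show how such binary spelling features can be derived:

```python
import re

# Illustrative spelling-feature patterns; these are simplified stand-ins,
# not the regular expressions used in the original system.
SPELLING_PATTERNS = {
    "url":  re.compile(r"^(?:https?|hxxps?)://\S+$", re.IGNORECASE),
    "ipv4": re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "md5":  re.compile(r"^[0-9a-f]{32}$", re.IGNORECASE),
    "file": re.compile(r"^\S+\.(?:exe|dll|sh|bat|js)$", re.IGNORECASE),
}

def spelling_feature_vector(token):
    """Binary vector: 1 where the token matches the corresponding pattern."""
    return [1 if pattern.match(token) else 0 for pattern in SPELLING_PATTERNS.values()]

print(spelling_feature_vector("http://www7.chrome-up.date/0m5EE"))  # [1, 0, 0, 0]
```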
Contextual Features
IOCs in cybersecurity articles are often described in a predictable way: they are connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human reader can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” in the text shown in Fig. FIGREF1 . Analyzing the whole corpus, we find that malicious file names tend to co-occur with words such as “download”, “malware”, “malicious”, etc. In this work, we consider words that can indicate the characteristics of their neighboring words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.
Taking the above into account, we introduce the contextual feature vector INLINEFORM0 for a given input token INLINEFORM1 , where the INLINEFORM2 element of INLINEFORM3 is defined as follows: DISPLAYFORM0
INLINEFORM0 is the frequency of token INLINEFORM1 in the whole corpus, while INLINEFORM2 is the frequency of contextual keyword INLINEFORM3 from the windowed portions of the texts centering on the token INLINEFORM4 in the whole corpus and INLINEFORM5 is the size of window. The set of contextual keywords INLINEFORM6 are automatically extracted from the annotated texts, where each contextual keyword INLINEFORM7 ( INLINEFORM8 ) satisfies the following conditions:
INLINEFORM0 , where INLINEFORM1 is the set of manually annotated IOCs and INLINEFORM2 is the lower bound of the frequency.
INLINEFORM0 is not a punctuation or stopword.
Note that we extract contextual keywords only from manually annotated data (e.g., training set), while we compute the contextual feature vector in all of the unlabeled data. According to this definition, it is obvious that the dimension of the contextual feature vector is as the same as the number of extracted contextual keywords. The size of window INLINEFORM0 and the lower bound of frequency INLINEFORM1 are then tuned by the validation set.
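A sketch of this computation is given below; the function and variable names are ours, and the default window size follows the value tuned on the English dataset:

```python
from collections import Counter

def contextual_feature_vectors(corpus, keywords, window=4):
    """corpus: list of tokenized sentences; keywords: ordered list of contextual keywords.

    Element k of a token's vector is the count of keywords[k] inside the +/-window
    context of that token across the corpus, divided by the token's corpus frequency.
    """
    keyword_set = set(keywords)
    token_freq = Counter(tok for sent in corpus for tok in sent)
    cooc = {tok: Counter() for tok in token_freq}
    for sent in corpus:
        for i, tok in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in keyword_set:
                    cooc[tok][sent[j]] += 1
    return {tok: [cooc[tok][k] / token_freq[tok] for k in keywords] for tok in token_freq}
```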
Usage of Features
The feature vector for an input token is the concatenation of the token's spelling feature vector and contextual feature vector. To elucidate the best usage of the feature vector, we evaluate it by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer ( INLINEFORM0 ), the hidden state of the token LSTM ( INLINEFORM1 ), and the output of the token LSTM ( INLINEFORM2 ). Among them, concatenating the feature vector with the LSTM hidden state vector and the sentence vector of attention in the token LSTM layer, as shown in Section SECREF4 , achieved the best performance. We speculate that the features play an important role in the task of IOC identification, and feature vectors near the output layer improve the performance more significantly than those at other locations.
Datasets
For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.
For Chinese dataset, we crawl 5,427 cybersecurity articles online from 35 cybersecurity blogs which are published from 2001 to 2018. All of these cybersecurity articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.
TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme.
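For example, under the BIO scheme a fragment containing IOCs might be labelled as follows (an illustrative example with our own tag strings, not taken from the annotated corpus):

```python
tokens = ["the", "dropper", "downloads", "ntdll.exe",           "from", "45.77.1.2"]
labels = ["O",   "O",       "O",         "B-file_information",  "O",    "B-IPv4"]
```

A multi-token IOC would receive a "B-" tag on its first token and "I-" tags on the following tokens, while every non-IOC token is tagged "O".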
Training Details
For pre-trained token embedding, we apply word2vec BIBREF17 to all crawled 687 English APT reports and 5,427 Chinese cybersecurity articles described in Section SECREF21 respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.
The ANN model is trained with stochastic gradient descent, which updates all parameters, i.e., token embeddings, character embeddings, parameters of the Bi-LSTM, weights of the sentence attention, weights of the multi-head self-attention, token features, and transition probabilities of the CRF layer, at each gradient step. For regularization, dropout is applied to the output of each sublayer of the ANN model. Further training details are given below: (a) For the attention-based Bi-LSTM module, the dimensions of the character embedding, the hidden states of the character-based token embedding LSTM, the hidden states of the Bi-LSTM, and the sentence attention are set to 25, 25, 100 and 100, respectively. For the multi-head self-attention module, we employ a stack of 6 multi-head self-attention layers, each of which has 4 heads, and the dimension of each head is set to 64. (b) All of the ANN's parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of training epochs is set to 30. After the first 30 epochs, we compute the average F1-score on the validation set with the current model after every epoch, and stop training when this score fails to increase during the last ten epochs. If training is not stopped early, it runs for a maximum of 100 epochs. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5.
Results
As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score over all 11 types of labels for a baseline as well as for the proposed model. As the baseline, we simply judge the input token as an IOC on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the window size and the lower bound of frequency for selecting contextual keywords are tuned to 4 and 7 for the evaluation on the English dataset, and to 3 and 4 for the evaluation on the Chinese dataset. The number of contextual keywords extracted from the English dataset is 1,328, and from the Chinese dataset 331.
Furthermore, we quantitatively compare our study with other typical works on sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation is concatenated with the word embedding, and Rei et al. BIBREF18 improved that model by introducing an attention mechanism to the character-level representations. We train these models with the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains higher precision, recall and F1-score than the other models in the task of IOC extraction. Compared with the second-best model, that of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% in precision and 10.0% in recall. The performance gain on the Chinese dataset is approximately 4.2% in precision and 9.0% in recall.
We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.
TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with the output of the model of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”; thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL: the token is recognized as a URL by the spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify the token “cr.sh” in the input Chinese text as a malicious file name, while the proposed model assigns the token the correct label. This is mainly because the token “cr.sh” is recognized as file information by the spelling features and tends to co-occur with the words “” (download) and “” (mining software). These two words often appear near malicious file information and are therefore extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features.
Analysis of Contextual Features
The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them with the whole ANN model during training. To demonstrate the effectiveness of the contextual features, we visualize the learned weight matrix of the contextual keywords and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of the contextual keywords for the given token. From this we see which contextual keywords are considered more important for representing the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphshing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, the contextual keywords “droppper” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For the non-IOC token “socket”, the contextual keywords “gateway” and “port” yield larger weights than other keywords because “socket” tends to co-occur with “gateway” and “port”.
We further calculate the average weight of each contextual keyword and show the 10 largest- and 10 smallest-weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as “hash” and “filename”, which tend to co-occur with malicious filenames, have the largest weights for IOCs, while contextual keywords such as “ascii” and “password” have the largest weights for non-IOCs. It is interesting to find that the contextual keywords “dropped” and “droppper”, which tend to co-occur with malicious file information and malware, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences in contextual information between IOCs and non-IOCs that are captured by the contextual features, and thus achieves better performance than previous works.
Training the Proposed Model with Bilingual Data
Even though security articles are written in different languages, most of the IOCs are written in English, and are described in a similar pattern. Therefore, using multilingual corpora could be a solution for addressing the lack of annotated data, and the performance of the proposed model is expected to be improved by extending the training set. To examine the hypothesis, we ran a number of additional experiments using both the English dataset and Chinese dataset, both of which are described in Section SECREF21 and are not parallel data or comparable data.
As pre-trained word embeddings for the bilingual training dataset, we applied the cross-lingual word embedding of Duong et al. BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words of the English dataset to Chinese and all the Chinese words of the Chinese dataset to English using Google Translate. As the contextual feature vector, we concatenate the contextual feature vector obtained from the English dataset with the one obtained from the Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set. TABLE TABREF31 shows that the proposed model trained with both the English and Chinese training sets achieves a small improvement in F1-score on the English test set compared with the model trained with only the English training set, and a large improvement in F1-score on the Chinese test set compared with the model trained with only the Chinese training set.
TABLE TABREF32 compares the scores for each label when the proposed model is trained with different training sets. On the English test set, the F1-scores of the labels “attack method”, “attack target” and “malware” obtained by the model trained with both the English and Chinese training sets are lower than those obtained by the model trained with only the English training set. This is mainly because tokens of these labels can be written in different languages, which harms the model trained with the bilingual training set. In contrast, benefiting from the extended training set, for types of labels that are usually written in English, e.g., “domain”, “file information”, “IPv4” and “vulnerability”, the proposed model trained with both the English and Chinese training sets achieves higher scores than the model trained with only the English training set. On the Chinese test set, the proposed model trained with both the English and Chinese training sets obtains obviously higher F1-scores than the model trained with only the Chinese training set for almost all types of labels. It is interesting to find that the labels “e-mail address”, “attack method” and “attacker”, which lack instances in the Chinese training set, show the biggest improvement with the model trained on the bilingual training set.
Conclusions
To conclude, in this paper we introduce a multi-head self-attention module and contextual features into the neural sequence labelling model, which significantly improves the performance on the task of IOC identification. The evaluation results of our experiments show that the proposed model is effective on both the English test set and the Chinese test set. We further evaluated the proposed model by training it on both the English and Chinese training sets and comparing it with models trained on only one training set, and the model trained with the merged bilingual training set performs better.
One of our future works is to integrate contextual embeddings from the bidirectional language model into our proposed model. Pretrained neural language models have proved effective in sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . Integrating both the contextual features and such contextual embeddings into the neural sequence labelling model is expected to further improve the performance of the proposed model. | The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords.
4db3c2ca6ddc87209c31b20763b7a3c1c33387bc | 4db3c2ca6ddc87209c31b20763b7a3c1c33387bc_0 | Q: Where are the cybersecurity articles used in the model sourced from?
Text: Introduction
Indicators of Compromise (IOCs) are forensic artifacts that are used as signs that a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text describing attack tactics, techniques and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . In the text, the token “INST.exe” is the name of an executable file of a malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, such IOCs can then be utilized for early detection of future attack attempts by intrusion detection systems and antivirus software, and thus they play an important role in the field of cybersecurity. However, with the rapid evolution of cyber threats, IOC data are produced at a high volume and velocity every day, which makes it increasingly hard for humans to gather and manage them.
A number of systems have been proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . However, most of those systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and these features often have to be pre-defined by experts in the field of cybersecurity. Furthermore, such systems need a large amount of annotated data as training data to train an IOC classifier. Those training data are frequently difficult to crowd-source, because non-experts can hardly distinguish IOCs from non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.
In this work, we consider the task of collecting IOCs from cybersecurity articles as a task of sequence labelling of natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned with a label, and tokens assigned with IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve a remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.
Among the previous studies of the neural sequence labelling task, Zhou et al. BIBREF12 first proposed using an end-to-end neural sequence labelling model to fully automate the process of IOC identification. Their model is based on an artificial neural network (ANN) with a bidirectional LSTM and a CRF. However, their newly introduced spelling features lead to the extraction of more false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features to the ANN model so that the proposed model can better gather the contextual information from the unstructured text for the task of IOC identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and recall of 85.2% on the English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on the Chinese test set. We further evaluate the proposed model by training it on both the English and Chinese datasets, which achieves even better performance.
Model
Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture.
Token Embedding Layer
The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder.
Sequence Representation Layer
The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .
Different from the previous work of sequence labelling in news articles or patient notes BIBREF9 , BIBREF10 , sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12 , we propose sequence representation layer that consists of 3 modules, i.e., attention-based Bi-LSTM module, multi-head self-attention module and token feature module.
Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce attention mechanism to Bi-LSTM to extract such tokens that are crucial to the meaning of the sentence. Then, we aggregate the representation of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13 , which is defined as follows: DISPLAYFORM0
That is to say, we first compute the INLINEFORM0 as a hidden representation of the hidden states of Bi-LSTM INLINEFORM1 for INLINEFORM2 input token, where INLINEFORM3 is obtained by concatenating the INLINEFORM4 hidden states of forward and backward LSTM, i.e., INLINEFORM5 . Then, we measure the importance of the INLINEFORM6 token with a trainable vector INLINEFORM7 and get a normalized importance weight INLINEFORM8 through a softmax function. After that, the sentence vector INLINEFORM9 is computed as a weighted sum of INLINEFORM10 ( INLINEFORM11 ). Here, weight matrix INLINEFORM12 , bias INLINEFORM13 and vector INLINEFORM14 are randomly initialized and jointly learned during the training process. Note that each input sentence merely has one sentence vector INLINEFORM15 as its weighted representation, and INLINEFORM16 is then used as a part of the INLINEFORM17 output of attention-based Bi-LSTM module, where INLINEFORM18 ( INLINEFORM19 ).
Motivated by the successful application of self-attention in many NLP tasks BIBREF14 , BIBREF15 , we add a multi-head self-attention module to adaptively enhance the embedding of each word with information from the other words in a text. By means of this, the local text regions on which convolution operates carry the global information of the text. Following the encoder part of Vaswani et al. BIBREF14 , the multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings INLINEFORM0 as input, the output is defined as follows: DISPLAYFORM0
where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are parameter matrices for the projections of queries INLINEFORM3 , keys INLINEFORM4 and values INLINEFORM5 in the INLINEFORM6 head, respectively. Here, INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are set as the input sequence INLINEFORM10 ( INLINEFORM11 ). The INLINEFORM12 is then given to the two convolutions and the output of multi-head self-attention INLINEFORM13 ( INLINEFORM14 ) is obtained.
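One layer of this stack can be sketched in PyTorch as follows (a recent PyTorch version with `batch_first` support is assumed; the model dimension of 256 corresponds to 4 heads of dimension 64 as in the training details, and input projections, residual connections and layer normalization are omitted for brevity):

```python
import torch
import torch.nn as nn

class SelfAttentionLayer(nn.Module):
    """One layer of the stack: multi-head self-attention + two kernel-1 convolutions."""
    def __init__(self, model_dim=256, heads=4, ff_dim=256):
        super().__init__()
        # Head dimension is model_dim / heads = 64 for the configuration above.
        self.attn = nn.MultiheadAttention(model_dim, heads, batch_first=True)
        self.conv1 = nn.Conv1d(model_dim, ff_dim, kernel_size=1)
        self.conv2 = nn.Conv1d(ff_dim, model_dim, kernel_size=1)

    def forward(self, x):
        # x: (batch, seq_len, model_dim); queries, keys and values are all x.
        attn_out, _ = self.attn(x, x, x)
        h = attn_out.transpose(1, 2)              # (batch, model_dim, seq_len) for Conv1d
        h = self.conv2(torch.relu(self.conv1(h)))
        return h.transpose(1, 2)                  # back to (batch, seq_len, model_dim)
```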
Furthermore, we introduce features tailored to IOCs to improve the performance of the proposed model when only a very small amount of training data is available. Here, we define two types of features, i.e., spelling features and contextual features, and map each token INLINEFORM0 ( INLINEFORM1 ) to a feature vector INLINEFORM2 , where INLINEFORM3 is the spelling feature vector and INLINEFORM4 is the contextual feature vector. Note that the values of the features are jointly learned during the training process. In Section SECREF3 , we explain the features in more detail.
As shown in Fig. FIGREF2 , the vector INLINEFORM0 ( INLINEFORM1 ) is a concatenation of the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . Each vector INLINEFORM5 is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector INLINEFORM6 .
CRF Layer
We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence INLINEFORM0 is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities: DISPLAYFORM0
where INLINEFORM0 is a matrix that contains the transition probabilities of two subsequent labels. Vector INLINEFORM1 is the output of the token LSTM layer, and INLINEFORM2 is the probability of label INLINEFORM3 in INLINEFORM4 . INLINEFORM5 is the probability that a token with label INLINEFORM6 is followed by a token with the label INLINEFORM7 . Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences.
Features
We extract a vector of features for each tokens of input sequences. In this section, we present each feature category in detail.
Spelling Features
Since the IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify IOCs. For example, to identify a URL, we defined a regular expression INLINEFORM0 and set the value of the URL feature to 1 when the input token matches the regular expression. However, such expressions and spelling rules could introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we further introduce the contextual features as described next.
Contextual Features
IOCs in cybersecurity articles are often described in a predictable way: they are connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human reader can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” in the text shown in Fig. FIGREF1 . Analyzing the whole corpus, we find that malicious file names tend to co-occur with words such as “download”, “malware”, “malicious”, etc. In this work, we consider words that can indicate the characteristics of their neighboring words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.
Taking the above into account, we introduce the contextual feature vector INLINEFORM0 for a given input token INLINEFORM1 , where the INLINEFORM2 element of INLINEFORM3 is defined as follows: DISPLAYFORM0
INLINEFORM0 is the frequency of token INLINEFORM1 in the whole corpus, while INLINEFORM2 is the frequency of contextual keyword INLINEFORM3 from the windowed portions of the texts centering on the token INLINEFORM4 in the whole corpus and INLINEFORM5 is the size of window. The set of contextual keywords INLINEFORM6 are automatically extracted from the annotated texts, where each contextual keyword INLINEFORM7 ( INLINEFORM8 ) satisfies the following conditions:
INLINEFORM0 , where INLINEFORM1 is the set of manually annotated IOCs and INLINEFORM2 is the lower bound of the frequency.
INLINEFORM0 is not a punctuation or stopword.
Note that we extract contextual keywords only from manually annotated data (e.g., training set), while we compute the contextual feature vector in all of the unlabeled data. According to this definition, it is obvious that the dimension of the contextual feature vector is as the same as the number of extracted contextual keywords. The size of window INLINEFORM0 and the lower bound of frequency INLINEFORM1 are then tuned by the validation set.
Usage of Features
The feature vector for an input token is the concatenation of the token's spelling feature vector and contextual feature vector. To elucidate the best usage of the feature vector, we evaluate it by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer ( INLINEFORM0 ), the hidden state of the token LSTM ( INLINEFORM1 ), and the output of the token LSTM ( INLINEFORM2 ). Among them, concatenating the feature vector with the LSTM hidden state vector and the sentence vector of attention in the token LSTM layer, as shown in Section SECREF4 , achieved the best performance. We speculate that the features play an important role in the task of IOC identification, and feature vectors near the output layer improve the performance more significantly than those at other locations.
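A sketch of the resulting per-token representation, under our own naming and the assumption that the feature vector is concatenated next to the LSTM hidden state and the attention sentence vector as described above, is:

```python
import torch

def sequence_representation(h_lstm, s_attn, feat, self_attn_out):
    """Assemble the per-token vector fed to the feed-forward classifier.

    h_lstm        : (seq_len, d_lstm)  Bi-LSTM hidden states
    s_attn        : (d_attn,)          sentence vector from the attention module
    feat          : (seq_len, d_feat)  spelling + contextual feature vectors
    self_attn_out : (seq_len, d_sa)    multi-head self-attention output
    """
    seq_len = h_lstm.size(0)
    s = s_attn.unsqueeze(0).expand(seq_len, -1)  # broadcast the sentence vector per token
    # Place the feature vector next to the LSTM hidden state / attention vector,
    # the best-performing placement reported above.
    return torch.cat([h_lstm, s, feat, self_attn_out], dim=-1)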
Datasets
For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.
For Chinese dataset, we crawl 5,427 cybersecurity articles online from 35 cybersecurity blogs which are published from 2001 to 2018. All of these cybersecurity articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.
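The random splits can be reproduced schematically as follows; the function and the fixed seed are our own conveniences:

```python
import random

def split_articles(articles, n_val, n_test, seed=0):
    """Randomly hold out validation and test articles; the rest are for training."""
    articles = list(articles)
    random.Random(seed).shuffle(articles)
    return {
        "val": articles[:n_val],
        "test": articles[n_val:n_val + n_test],
        "train": articles[n_val + n_test:],
    }

# English: 370 annotated articles -> 70 validation, 70 test, 230 training.
# Chinese: 607 annotated articles -> 122 validation, 122 test, 363 training.
```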
TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme.
Training Details
For pre-trained token embedding, we apply word2vec BIBREF17 to all crawled 687 English APT reports and 5,427 Chinese cybersecurity articles described in Section SECREF21 respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.
The ANN model is trained with stochastic gradient descent, which updates all parameters, i.e., token embeddings, character embeddings, parameters of the Bi-LSTM, weights of the sentence attention, weights of the multi-head self-attention, token features, and transition probabilities of the CRF layer, at each gradient step. For regularization, dropout is applied to the output of each sublayer of the ANN model. Further training details are given below: (a) For the attention-based Bi-LSTM module, the dimensions of the character embedding, the hidden states of the character-based token embedding LSTM, the hidden states of the Bi-LSTM, and the sentence attention are set to 25, 25, 100 and 100, respectively. For the multi-head self-attention module, we employ a stack of 6 multi-head self-attention layers, each of which has 4 heads, and the dimension of each head is set to 64. (b) All of the ANN's parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of training epochs is set to 30. After the first 30 epochs, we compute the average F1-score on the validation set with the current model after every epoch, and stop training when this score fails to increase during the last ten epochs. If training is not stopped early, it runs for a maximum of 100 epochs. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5.
Results
As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score over all 11 types of labels for a baseline as well as for the proposed model. As the baseline, we simply judge the input token as an IOC on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the window size and the lower bound of frequency for selecting contextual keywords are tuned to 4 and 7 for the evaluation on the English dataset, and to 3 and 4 for the evaluation on the Chinese dataset. The number of contextual keywords extracted from the English dataset is 1,328, and from the Chinese dataset 331.
Furthermore, we quantitatively compare our study with other typical works on sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation is concatenated with the word embedding, and Rei et al. BIBREF18 improved that model by introducing an attention mechanism to the character-level representations. We train these models with the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains higher precision, recall and F1-score than the other models in the task of IOC extraction. Compared with the second-best model, that of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% in precision and 10.0% in recall. The performance gain on the Chinese dataset is approximately 4.2% in precision and 9.0% in recall.
We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.
TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with the output of the model of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”; thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL: the token is recognized as a URL by the spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify the token “cr.sh” in the input Chinese text as a malicious file name, while the proposed model assigns the token the correct label. This is mainly because the token “cr.sh” is recognized as file information by the spelling features and tends to co-occur with the words “” (download) and “” (mining software). These two words often appear near malicious file information and are therefore extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features.
Analysis of Contextual Features
The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them with the whole ANN model during training. To demonstrate the effectiveness of the contextual features, we visualize the learned weight matrix of the contextual keywords and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of the contextual keywords for the given token. From this we see which contextual keywords are considered more important for representing the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphshing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, the contextual keywords “droppper” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For the non-IOC token “socket”, the contextual keywords “gateway” and “port” yield larger weights than other keywords because “socket” tends to co-occur with “gateway” and “port”.
We further calculate the average weight of each contextual keyword and show the 10 largest- and 10 smallest-weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as “hash” and “filename”, which tend to co-occur with malicious filenames, have the largest weights for IOCs, while contextual keywords such as “ascii” and “password” have the largest weights for non-IOCs. It is interesting to find that the contextual keywords “dropped” and “droppper”, which tend to co-occur with malicious file information and malware, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences in contextual information between IOCs and non-IOCs that are captured by the contextual features, and thus achieves better performance than previous works.
Training the Proposed Model with Bilingual Data
Even though security articles are written in different languages, most of the IOCs are written in English, and are described in a similar pattern. Therefore, using multilingual corpora could be a solution for addressing the lack of annotated data, and the performance of the proposed model is expected to be improved by extending the training set. To examine the hypothesis, we ran a number of additional experiments using both the English dataset and Chinese dataset, both of which are described in Section SECREF21 and are not parallel data or comparable data.
As pre-trained word embeddings for the bilingual training dataset, we applied a cross-lingual word embedding obtained by the work of Duong et al. BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words from the English dataset to Chinese and all the Chinese words from the Chinese dataset to English using Google translation. As the contextual feature vector, we concatenate the contextual feature vector obtained from the English dataset with the one obtained from the Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set. TABLE TABREF31 shows that the proposed model trained with the English and Chinese training sets achieves a small improvement in F1-score on the English test set when compared with the model trained with only the English training set, and a large improvement in F1-score on the Chinese test set when compared with the model trained with only the Chinese training set.
We compare the scores of each label when the proposed model is trained with different training sets in TABLE TABREF32 . On the English test set, the F1-scores of the labels “attack method”, “attack target” and “malware” by the model trained with the English and Chinese training sets are lower than those of the model trained with only the English training set. This is mainly because tokens of these labels can be written in different languages, which hurts the model trained with the bilingual training set. In contrast, benefiting from the extension of the training set, for types of labels that are usually written in English, e.g., “domain”, “file information”, “IPv4” and “vulnerability”, the proposed model trained with the English and Chinese training sets achieves higher scores than the model trained with only the English training set. On the Chinese test set, the proposed model trained with the English and Chinese training sets obtains clearly higher F1-scores than the model trained with only the Chinese training set for almost all types of labels. It is interesting to find that the labels “e-mail address”, “attack method” and “attacker”, which lack instances in the Chinese training set, show the biggest improvement when using the model trained with the bilingual training set.
Conclusions
To conclude, in this paper we introduce a multi-head self-attention module and contextual features into the neural sequence labelling model, which significantly improves performance in the task of IOC identification. Based on the results of our experiments, the proposed model proves effective on both the English and the Chinese test sets. We further evaluated the proposed model by training it on both the English and the Chinese training sets and compared it with models trained on only one training set; the model trained with the merged bilingual training set performs better.
One of our future works is to integrate the contextual embeddings from the bidirectional language model into our proposed model. The pretrained neural language models are proved effective in the sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . It is expected to improve the performance of the proposed model by integrating both the contextual features and contextual embeddings into the neural sequence labelling model. | from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018 |
63337fd803f6fdd060ebd0f53f9de79d451810cd | 63337fd803f6fdd060ebd0f53f9de79d451810cd_0 | Q: What type of hand-crafted features are used in state of the art IOC detection systems?
Text: Introduction
Indicators of Compromise (IOCs) are forensic artifacts that are used as signs that a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are composed of combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text describing attack tactics, techniques and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . In the text, the token “INST.exe” is the name of an executable file of a malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, these kinds of IOCs can then be utilized for early detection of future attack attempts by intrusion detection systems and antivirus software, and thus they play an important role in the field of cybersecurity. However, with the rapid evolution of cyber threats, IOC data are produced at high volume and velocity every day, which makes it increasingly hard for humans to gather and manage them.
A number of systems have been proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . However, most of those systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and these features often have to be pre-defined by experts in the field of cybersecurity. Furthermore, they need a large amount of annotated data as training data to train an IOC classifier. Such training data are frequently difficult to crowdsource, because non-experts can hardly distinguish IOCs from non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.
In this work, we consider the task of collecting IOCs from cybersecurity articles as a task of sequence labelling of natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned with a label, and tokens assigned with IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve a remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.
Among the previous studies of the neural sequence labelling task, Zhou et al. BIBREF12 first proposed using an end-to-end neural sequence labelling model to fully automate the process of IOC identification. Their model is based on an artificial neural network (ANN) with bidirectional LSTM and CRF. However, their newly introduced spelling features bring more false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features into the ANN model so that the proposed model can perform better in gathering contextual information from the unstructured text for the task of IOC identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and recall of 85.2% on the English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on the Chinese test set. We further evaluate the proposed model by training it on both the English and Chinese datasets, which achieves even better performance.
Model
Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture.
Token Embedding Layer
The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder.
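To make the layer concrete, the following is a minimal PyTorch sketch (not the authors' code) of a token embedding that concatenates a word embedding with the final states of a character-level bidirectional LSTM; module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TokenEmbedding(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, word_dim=100, char_dim=25, char_hidden=25):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (seq_len,); char_ids: (seq_len, max_word_len)
        w = self.word_emb(word_ids)                  # (seq_len, word_dim)
        c = self.char_emb(char_ids)                  # (seq_len, max_word_len, char_dim)
        _, (h_n, _) = self.char_lstm(c)              # h_n: (2, seq_len, char_hidden)
        c_vec = torch.cat([h_n[0], h_n[1]], dim=-1)  # final forward/backward states
        return torch.cat([w, c_vec], dim=-1)         # (seq_len, word_dim + 2*char_hidden)
```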
Sequence Representation Layer
The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .
Different from previous work on sequence labelling in news articles or patient notes BIBREF9 , BIBREF10 , sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for an LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12 , we propose a sequence representation layer that consists of three modules, i.e., an attention-based Bi-LSTM module, a multi-head self-attention module and a token feature module.
Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce attention mechanism to Bi-LSTM to extract such tokens that are crucial to the meaning of the sentence. Then, we aggregate the representation of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13 , which is defined as follows: DISPLAYFORM0
That is to say, we first compute the INLINEFORM0 as a hidden representation of the hidden states of Bi-LSTM INLINEFORM1 for INLINEFORM2 input token, where INLINEFORM3 is obtained by concatenating the INLINEFORM4 hidden states of forward and backward LSTM, i.e., INLINEFORM5 . Then, we measure the importance of the INLINEFORM6 token with a trainable vector INLINEFORM7 and get a normalized importance weight INLINEFORM8 through a softmax function. After that, the sentence vector INLINEFORM9 is computed as a weighted sum of INLINEFORM10 ( INLINEFORM11 ). Here, weight matrix INLINEFORM12 , bias INLINEFORM13 and vector INLINEFORM14 are randomly initialized and jointly learned during the training process. Note that each input sentence merely has one sentence vector INLINEFORM15 as its weighted representation, and INLINEFORM16 is then used as a part of the INLINEFORM17 output of attention-based Bi-LSTM module, where INLINEFORM18 ( INLINEFORM19 ).
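A minimal PyTorch sketch of this attention mechanism (after Yang et al.) is given below; it is not the authors' implementation, and the shapes are assumptions. Each hidden state is projected through a one-layer MLP, scored against a trainable context vector, normalised with a softmax, and the hidden states are summed with those weights to form the sentence vector.

```python
import torch
import torch.nn as nn

class SentenceAttention(nn.Module):
    def __init__(self, hidden_dim=200, att_dim=100):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, att_dim)          # W, b
        self.context = nn.Parameter(torch.randn(att_dim))   # trainable vector u_w

    def forward(self, h):
        # h: (seq_len, hidden_dim) -- concatenated forward/backward LSTM states
        u = torch.tanh(self.proj(h))                     # hidden representation of each state
        alpha = torch.softmax(u @ self.context, dim=0)   # normalised importance weights
        s = (alpha.unsqueeze(-1) * h).sum(dim=0)         # weighted sum = sentence vector
        return s, alpha
```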
Motivated by the successful application of self-attention in many NLP tasks BIBREF14 , BIBREF15 , we add a multi-head self-attention module to adaptively enhance the embedding of each word with information from the other words in the text. In this way, the local text regions on which the convolutions operate carry global information about the text. Following the encoder part of Vaswani et al. BIBREF14 , the multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings INLINEFORM0 as input, the output is defined as follows: DISPLAYFORM0
where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are parameter matrices for the projections of queries INLINEFORM3 , keys INLINEFORM4 and values INLINEFORM5 in the INLINEFORM6 head, respectively. Here, INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are set as the input sequence INLINEFORM10 ( INLINEFORM11 ). The INLINEFORM12 is then given to the two convolutions and the output of multi-head self-attention INLINEFORM13 ( INLINEFORM14 ) is obtained.
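The sketch below shows one such layer in PyTorch: multi-head self-attention where queries, keys and values are all the input sequence, followed by two convolutions with kernel size 1. It is an illustrative sketch, not the original code; residual connections and layer normalisation are omitted, and the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, d_ff=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv1 = nn.Conv1d(d_model, d_ff, kernel_size=1)
        self.conv2 = nn.Conv1d(d_ff, d_model, kernel_size=1)

    def forward(self, x):
        # x: (batch, seq_len, d_model); queries, keys and values are all x
        m, _ = self.attn(x, x, x)
        m = m.transpose(1, 2)                       # (batch, d_model, seq_len) for Conv1d
        m = self.conv2(torch.relu(self.conv1(m)))
        return m.transpose(1, 2)                    # back to (batch, seq_len, d_model)
```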
Furthermore, we introduce several features defined for IOCs to improve the performance of the proposed model on a very small amount of training data. Here, we define two types of features, i.e., spelling features and contextual features, and map each token INLINEFORM0 ( INLINEFORM1 ) to a feature vector INLINEFORM2 , where INLINEFORM3 is the spelling feature vector and INLINEFORM4 is the contextual feature vector. Note that the values of the features are jointly learned during training. In Section SECREF3 , we explain the features in more detail.
As shown in Fig. FIGREF2 , the vector INLINEFORM0 ( INLINEFORM1 ) is a concatenation of the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . Each vector INLINEFORM5 is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector INLINEFORM6 .
CRF Layer
We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence INLINEFORM0 is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities: DISPLAYFORM0
where INLINEFORM0 is a matrix that contains the transition probabilities of two subsequent labels. Vector INLINEFORM1 is the output of the token LSTM layer, and INLINEFORM2 is the probability of label INLINEFORM3 in INLINEFORM4 . INLINEFORM5 is the probability that a token with label INLINEFORM6 is followed by a token with the label INLINEFORM7 . Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences.
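As a worked illustration of the scoring function only (Viterbi decoding and the softmax over all label sequences are omitted), the following sketch sums the per-token emission scores and the bigram transition scores for a given label sequence; it is a simplified reading of the definition above, not the original implementation.

```python
import numpy as np

def sequence_score(emissions: np.ndarray, transitions: np.ndarray, labels: list) -> float:
    """emissions: (seq_len, n_labels) per-token label scores; transitions: (n_labels, n_labels)."""
    score = sum(emissions[i, y] for i, y in enumerate(labels))
    score += sum(transitions[labels[i], labels[i + 1]] for i in range(len(labels) - 1))
    return float(score)
```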
Features
We extract a vector of features for each token of the input sequences. In this section, we present each feature category in detail.
Spelling Features
Since the IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify IOCs. For example, to identify a URL, we defined a regular expression INLINEFORM0 and set the value of the URL feature to 1 when the input token matches the regular expression. However, such expressions and spelling rules could introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we further introduce the contextual features as described next.
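As an illustration, a spelling feature extractor of this kind can be sketched as below; the paper's exact regular expressions are not shown in the text, so the patterns here are assumptions chosen only to demonstrate the mechanism.

```python
import re

SPELLING_PATTERNS = {
    "url":  re.compile(r"^(hxxps?|https?)://[\w.\-/~%?=&#]+$", re.IGNORECASE),
    "ipv4": re.compile(r"^(\d{1,3}\.){3}\d{1,3}$"),
    "md5":  re.compile(r"^[a-f0-9]{32}$", re.IGNORECASE),
    "file": re.compile(r"^[\w\-.]+\.(exe|dll|sh|bat|doc|js)$", re.IGNORECASE),
}

def spelling_features(token: str) -> list:
    # One binary feature per pattern, set to 1 when the token matches.
    return [1 if p.match(token) else 0 for p in SPELLING_PATTERNS.values()]

# e.g. spelling_features("ntdll.exe") -> [0, 0, 0, 1]
```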
Contextual Features
IOCs in cybersecurity articles are often described in a predictable way: they are connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” in the text shown in Fig. FIGREF1 . Analyzing the whole corpus, it is interesting that malicious file names tend to co-occur with words such as “download”, “malware” and “malicious”. In this work, we consider words that can indicate the characteristics of neighbouring words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.
Taking the above into account, we introduce the contextual feature vector INLINEFORM0 for a given input token INLINEFORM1 , where the INLINEFORM2 element of INLINEFORM3 is defined as follows: DISPLAYFORM0
INLINEFORM0 is the frequency of token INLINEFORM1 in the whole corpus, while INLINEFORM2 is the frequency of contextual keyword INLINEFORM3 from the windowed portions of the texts centering on the token INLINEFORM4 in the whole corpus and INLINEFORM5 is the size of window. The set of contextual keywords INLINEFORM6 are automatically extracted from the annotated texts, where each contextual keyword INLINEFORM7 ( INLINEFORM8 ) satisfies the following conditions:
INLINEFORM0 , where INLINEFORM1 is the set of manually annotated IOCs and INLINEFORM2 is the lower bound of the frequency.
INLINEFORM0 is not a punctuation or stopword.
Note that we extract contextual keywords only from manually annotated data (e.g., the training set), while we compute the contextual feature vector on all of the unlabeled data. According to this definition, it is obvious that the dimension of the contextual feature vector is the same as the number of extracted contextual keywords. The window size INLINEFORM0 and the lower bound of frequency INLINEFORM1 are then tuned on the validation set.
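A minimal sketch of the feature computation is given below. The extracted formula is not reproduced in this text, so the normalisation is an assumption: the j-th element for a token is taken to be the count of the j-th contextual keyword inside windows of size n around the token's occurrences, divided by the token's corpus frequency; windows are restricted to sentence boundaries for simplicity, and keyword extraction from the annotated IOCs is not shown.

```python
from collections import Counter

def contextual_features(corpus, keywords, n=4):
    # corpus: list of tokenised sentences; keywords: extracted contextual keywords
    kw = set(keywords)
    token_freq = Counter(tok for sent in corpus for tok in sent)
    around = {tok: Counter() for tok in token_freq}
    for sent in corpus:
        for i, tok in enumerate(sent):
            window = sent[max(0, i - n):i] + sent[i + 1:i + 1 + n]
            around[tok].update(w for w in window if w in kw)
    return {tok: [around[tok][k] / token_freq[tok] for k in keywords]
            for tok in token_freq}
```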
Usage of Features
The feature vector for an input token is the concatenation of the spelling feature vector and the contextual feature vector. Here, to elucidate the best usage of the feature vector, we evaluate it by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer ( INLINEFORM0 ), the hidden state of the token LSTM ( INLINEFORM1 ), and the output of the token LSTM ( INLINEFORM2 ). Among them, concatenating the feature vector with the LSTM hidden state vector and the attention sentence vector in the token LSTM layer, as shown in Section SECREF4 , achieved the best performance. We speculate that the features play an important role in the task of IOC identification and that feature vectors near the output layer improve the performance more significantly than those at other locations.
Datasets
For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.
For Chinese dataset, we crawl 5,427 cybersecurity articles online from 35 cybersecurity blogs which are published from 2001 to 2018. All of these cybersecurity articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.
TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme.
Training Details
For pre-trained token embedding, we apply word2vec BIBREF17 to all crawled 687 English APT reports and 5,427 Chinese cybersecurity articles described in Section SECREF21 respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.
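The pre-training step could be reproduced, for example, with the gensim implementation of word2vec; the paper only states that word2vec BIBREF17 is used, so the choice of gensim, the file name and the whitespace tokenisation below are assumptions, while the hyperparameters follow the text (window 8, minimum count 1, 15 iterations, 8 negative samples, skip-gram, dimension 100).

```python
from gensim.models import Word2Vec

# "apt_reports.txt" is a hypothetical file with one pre-tokenised article line per row.
sentences = [line.split() for line in open("apt_reports.txt", encoding="utf-8")]
model = Word2Vec(sentences, vector_size=100, window=8, min_count=1,
                 sg=1, negative=8, epochs=15, workers=4)
model.wv.save_word2vec_format("token_embeddings.vec")
```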
The ANN model is trained with stochastic gradient descent, updating all parameters, i.e., token embeddings, character embeddings, parameters of the Bi-LSTM, weights of the sentence attention, weights of the multi-head self-attention, token features, and transition probabilities of the CRF layer, at each gradient step. For regularization, dropout is applied to the output of each sub-layer of the ANN model. Further training details are given below: (a) For the attention-based Bi-LSTM module, the dimensions of the character embedding, the hidden states of the character-based token embedding LSTM, the hidden states of the Bi-LSTM, and the sentence attention are set to 25, 25, 100 and 100, respectively. For the multi-head self-attention module, we employ a stack of 6 multi-head self-attention layers, each of which has 4 heads, and the dimension of each head is set to 64. (b) All of the ANN’s parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of training epochs is set to 30. After the first 30 epochs, we compute the average F1-score on the validation set with the current model after every epoch, and stop training when the average F1-score on the validation set fails to increase during the last ten epochs. If training is not stopped early, we train our model for a maximum of 100 epochs. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5.
Results
As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score over all 11 types of labels for a baseline as well as for the proposed model. As the baseline, we simply judge input tokens as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than that of the baseline. Here, as described in Section SECREF14 , the window size and the lower bound of frequency for selecting contextual keywords are tuned to 4 and 7 throughout the evaluation of the English dataset, and to 3 and 4 throughout the evaluation of the Chinese dataset. The number of contextual keywords extracted from the English dataset is 1,328, and from the Chinese dataset 331.
Furthermore, we quantitatively compare our study with other typical works of sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation was concatenated with the word embedding, and Rei et al. BIBREF18 improved the model by introducing an attention mechanism to the character-level representations. We train these models by employing the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains higher precision, recall and F1-score than the other models in the task of IOC extraction. Compared with the second-best model of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% in precision and 10.0% in recall. The performance gain of the proposed model on the Chinese dataset is approximately 4.2% in precision and 9.0% in recall.
We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.
TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with the output of the model of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”; thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL: the token is marked as a URL by the spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify the token “cr.sh” of the input Chinese text as a malicious file name, while the token is assigned the correct label by the proposed model. This is mainly because the token “cr.sh” is marked as file information by the spelling features and tends to co-occur with the Chinese words for “download” and “mining software”. These two words often appear near malicious file information and are therefore extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features.
Analysis of Contextual Features
The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them during training with the whole ANN model. To prove the effectiveness of the contextual features, we visualize the learned weight matrix of the contextual keywords and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of the contextual keywords for the given token. From this we see which contextual keywords are considered more important to represent the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphishing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, the contextual keywords “dropped” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For the non-IOC token “socket”, the contextual keywords “gateway” and “port” yield larger weights than other keywords, because “socket” tends to co-occur with “gateway” and “port”.
We further calculate the average weight of each contextual keyword and show the top 10 and bottom 10 largest-weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as “hash” and “filename”, which tend to co-occur with malicious filenames, have the largest weights for IOCs, while contextual keywords such as “ascii” and “password” have the largest weights for non-IOCs. Here, it is interesting to find that the contextual keywords “dropped” and “dropper”, which tend to co-occur with malicious file information and malware, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences in contextual information between IOCs and non-IOCs that are represented by the contextual features, and thus achieves better performance than the previous works.
Training the Proposed Model with Bilingual Data
Even though security articles are written in different languages, most of the IOCs are written in English, and are described in a similar pattern. Therefore, using multilingual corpora could be a solution for addressing the lack of annotated data, and the performance of the proposed model is expected to be improved by extending the training set. To examine the hypothesis, we ran a number of additional experiments using both the English dataset and Chinese dataset, both of which are described in Section SECREF21 and are not parallel data or comparable data.
As pre-trained word embeddings for the bilingual training dataset, we applied a cross-lingual word embedding obtained by the work of Duong et al. BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words from the English dataset to Chinese and all the Chinese words from the Chinese dataset to English using Google translation. As the contextual feature vector, we concatenate the contextual feature vector obtained from the English dataset with the one obtained from the Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set. TABLE TABREF31 shows that the proposed model trained with the English and Chinese training sets achieves a small improvement in F1-score on the English test set when compared with the model trained with only the English training set, and a large improvement in F1-score on the Chinese test set when compared with the model trained with only the Chinese training set.
We compare the scores of each label when the proposed model is trained with different training sets in TABLE TABREF32 . On the English test set, the F1-scores of the labels “attack method”, “attack target” and “malware” by the model trained with the English and Chinese training sets are lower than those of the model trained with only the English training set. This is mainly because tokens of these labels can be written in different languages, which hurts the model trained with the bilingual training set. In contrast, benefiting from the extension of the training set, for types of labels that are usually written in English, e.g., “domain”, “file information”, “IPv4” and “vulnerability”, the proposed model trained with the English and Chinese training sets achieves higher scores than the model trained with only the English training set. On the Chinese test set, the proposed model trained with the English and Chinese training sets obtains clearly higher F1-scores than the model trained with only the Chinese training set for almost all types of labels. It is interesting to find that the labels “e-mail address”, “attack method” and “attacker”, which lack instances in the Chinese training set, show the biggest improvement when using the model trained with the bilingual training set.
Conclusions
To conclude, in this paper we introduce a multi-head self-attention module and contextual features into the neural sequence labelling model, which significantly improves performance in the task of IOC identification. Based on the results of our experiments, the proposed model proves effective on both the English and the Chinese test sets. We further evaluated the proposed model by training it on both the English and the Chinese training sets and compared it with models trained on only one training set; the model trained with the merged bilingual training set performs better.
One of our future works is to integrate the contextual embeddings from the bidirectional language model into our proposed model. The pretrained neural language models are proved effective in the sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . It is expected to improve the performance of the proposed model by integrating both the contextual features and contextual embeddings into the neural sequence labelling model. | Unanswerable |
63496705fff20c55d4b3d8cdf4786f93e742dd3d | 63496705fff20c55d4b3d8cdf4786f93e742dd3d_0 | Q: Do they compare DeepER against other approaches?
Text: Introduction
A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. Such a broadly defined task seems very hard; BIBREF0 describes it as AI-Complete, i.e. equivalent to building a general artificial intelligence. Nonetheless, the field has attracted a lot of attention in the Natural Language Processing (NLP) community, as it provides a way to employ numerous NLP tools in an exploitable end-user system. It has resulted in valuable contributions within TREC competitions BIBREF1 and, quite recently, in a system called IBM Watson BIBREF2 , successfully competing with humans in the task.
However, the problem remains far from solved. Firstly, solutions designed for English are not always easily transferable to languages with more complex syntax rules and fewer available resources, such as the Slavonic languages. Secondly, the vast complexity and formidable hardware requirements of IBM Watson suggest that there is still room for improvement in making QA systems smaller and smarter.
This work attempts to contribute in both of the above areas. It introduces RAFAEL (RApid Factoid Answer Extraction aLgorithm), a complete QA system for Polish language. It is the first QA system designed to use an open-domain plain-text knowledge base in Polish to address factoid questions not only by providing the most relevant sentence, but also an entity, representing the answer itself. The Polish language, as other Slavonic, features complex inflection and relatively free word order, which poses additional challenges in QA. Chapter SECREF2 contains a detailed description of the system architecture and its constituents.
In the majority of such systems, designers' attention focuses on different aspects of the sentence selection procedure. Herein, a different idea is incorporated, concentrating on an entity picking procedure. It allows the system to compare fewer sentences, namely those likely to contain an answer. To do that, classical Named Entity Recognition (NER) gets replaced by Deep Entity Recognition. DeepER, introduced in this work, is a generalisation of NER which, instead of assigning each entity to one of several predefined NE categories, assigns it to a WordNet synset.
For example, let us consider a question: Which exiled European monarch returned to his country as a prime minister of a republic?. In the classical approach, we recognise the question as concerning a person and treat all persons found in texts as potential answers. Using DeepER, it is possible to limit the search to persons being monarchs, which results in more accurate answers. In particular, we could utilise information that Simeon II (our answer) is a tsar; thanks to WordNet relations we know that it implies being a monarch. DeepER is a generalisation of NER also from another point of view – it goes beyond the classical named entity categories and treats all entities equally. For example, we could answer a question Which bird migrates from the Arctic to the Antarctic and back every year?, although arctic tern is not recognized as NE by NER systems. Using DeepER, we may mark it as a seabird (hence a bird) and include among possible answers. Chapter SECREF3 outlines this approach.
The entity recognition process requires an entity library, containing known entities, their text representations (different ways of textual notation) and the WordNet synsets to which they belong. To obtain this information, the program analyses definitions of entries found in an encyclopaedia (in this case the Polish Wikipedia). In the previous example, it would use a Wikipedia definition: The Arctic Tern (Sterna paradisaea) is a seabird of the tern family Sternidae. This process, which also involves redirect and disambiguation pages, is described in section SECREF40 . Next, having all the entities and their names, it suffices to locate their mentions in a text. The task (section SECREF73 ) is far from trivial because of the complicated named entity inflection in Polish (typical for Slavonic languages, see BIBREF3 ).
The DeepER framework also provides another useful service, i.e. automatic evaluation. Usually, QA systems are evaluated by verifying agreement between the obtained and the actual answer based on human judgement. Plain string-to-string equality is not enough, as many entities have different text representations, e.g. John F. Kennedy is as good as John Fitzgerald Kennedy and John Kennedy, or JFK (again, the nominal inflection in Polish complicates the problem even more). However, with DeepER, a candidate answer can undergo the same recognition process and be compared to the actual expected entity, not a string.
Thanks to automatic evaluation, vast experiments requiring numerous evaluations may be performed swiftly, saving a massive amount of time and human resources. As a test set, authentic questions from a popular Polish quiz TV show are used. Results of experiments, testing (among others) the optimal context length, the number of retrieved documents and the type of entity recognition solution, appear in section SECREF88 .
To avoid overfitting, the final system evaluation is executed on a separate test set, previously unused in development, and is checked manually. The results are shown in section SECREF93 and discussed in chapter SECREF6 . Finally, chapter SECREF7 concludes the paper.
RAFAEL
As stated in previous chapter, RAFAEL is a computer system solving a task of Polish text-based, open-domain, factoid question answering. It means that provided questions, knowledge base and returned answers are expressed in Polish and may belong to any domain. The system analyses the knowledge base, consisting of a set of plain text documents, and returns answers (as concise as possible, e.g. a person name), supplied with information about supporting sentences and documents.
What are the kinds of requests that fall into the category of factoid questions? For the purpose of this study, it is understood to include the following types:
Although the above list rules out many challenging types of questions, demanding more elaborate answers (e.g. Why was JFK killed?, What is a global warming?, How to build a fence?), it still involves very distinct problems. Although RAFAEL can recognize factoid questions from any of these types and find documents relevant to them (see more in section SECREF18 and BIBREF4 ), its answering capabilities are limited to those requesting single unnamed entities and named entities. In this document, they are called entity questions.
The task description here is similar to the TREC competitions and, completed with test data described in section SECREF80 , could play a similar role for Polish QA, i.e. provide a possibility to compare different solutions of the same problem. More information about the task, including its motivation, difficulties and a feasibility study for Polish could be found in BIBREF5 .
Related work
The problem of Question Answering is not new to the Polish NLP community (nor to those working on other morphologically rich languages), but none of the studies presented so far coincides with the notion of plain text-based QA presented above.
First Polish QA attempts date back to 1985, when BIBREF6 presented a Polish interface to ORBIS database, containing information about the solar system. The database consisted of a set of PROLOG rules and the role of the system (called POLINT) was to translate Polish questions to appropriate queries. Another early solution, presented by BIBREF7 , could only work in a restricted domain (business information).
A system dealing with a subset of the TREC tasks was created for Bulgarian by BIBREF8 . His solution answers only three types of questions: Definition, Where-Is and Temporal. He was able to achieve good results with 100 translated TREC questions, using several manually created answer patterns, without NER or any semantic information. Another system for Bulgarian BIBREF9 participated in the CLEF 2005 competition. Its answer extraction module bases on partial grammars, playing a role of patterns for different types of questions. They could answer correctly 37 of 200 questions, of which only 16 belong to the factoid type. Previously the same team BIBREF10 took part in a Bulgarian-English track of the CLEF 2004, in which Bulgarian questions were answered using English texts.
A QA solution was also created for Slovene BIBREF11 . The task there is to answer students' questions using databases, spreadsheet files and a web service. Therefore, it differs from the problem discussed above by limited domain (issues related to a particular faculty) and the non-textual knowledge base. Unfortunately, no quantitative results are provided in this work.
More recently, several elements of a Polish QA system called Hipisek were presented by BIBREF12 . It is based on a fairly common scheme of transforming a question into a search query and finding the most appropriate sentence satisfying the question constraints. Unfortunately, a very small evaluation set (65 questions) and an unspecified knowledge base (gathered by a web crawler) make it difficult to compare the results. In their later works BIBREF13 , BIBREF14 , the team concentrated on spatial reasoning using a knowledge base encoded as a set of predicates.
The approach presented by BIBREF15 is the closest to the scope of this work, as it includes analysis of Polish Wikipedia content and evaluation is based on questions translated from a TREC competition. Unfortunately, it heavily relies on a structure of Wikipedia entries, making it impossible to use with an arbitrary textual corpus.
A non-standard approach to answer patterns has been proposed by BIBREF16 . In their Czech open-domain QA system they used a set of templates associated with question types, but also presented a method to learn them semi-automatically from search results. BIBREF17 in their Bulgarian QA system concentrated on semantic matching between a question and a possible answer, checked using dependency parsing. However, they provide no data regarding the answering precision of the whole system.
The last Polish system worth mentioning has been created by BIBREF18 . Generally, their task, called Open Domain Question Answering (ODQA), resembles what is treated here, but with one major difference: a document is considered an answer; therefore they focus on improving ranking in the document retrieval stage. They found out that it could benefit from taking the proximity of query term occurrences into account.
As some of Slavonic languages lack necessary linguistic tools and resources, only partial solutions of QA problems exist for them, e.g. document retrieval for Macedonian BIBREF19 , question classification for Croatian BIBREF20 or answer validation for Russian BIBREF21 .
The idea of DeepER in a nutshell is to improve QA by annotating a text with WordNet synsets using an entity base created by understanding definitions found in encyclopaedia. Parts of this concept have already appeared in the NLP community.
A technique of coordinating synsets assigned to a question and a possible answer emerged in a study by BIBREF45 . While a question analysis there seems very similar to this work, entity library (called proper noun ontology) generation differs a lot. The author analysed 1 GB of newswire text and extracted certain expressions, e.g. "X, such as Y" implies that Y is an instance of X. Albeit precision of resulting base was not very good (47 per cent for non-people proper names), it led to a substantial improvement of QA performance.
The idea of analysing encyclopaedic definitions to obtain this type of information already appeared, but was employed for different applications. For example, BIBREF46 described a method of building a gazetteer by analysing hyperonymy branches of nouns of first sentences in Wikipedia definitions. Unlike in this work, an original synset was replaced by a coarse-grained NER category. Another example of application is a NE recognizer BIBREF47 using words from a definition as additional features for a standard CRF classifier. In their definition analysis only the last word of the first nominal group was used.
Other researchers dealt with a task explicitly defined as classifying Wikipedia entries to NER categories. For example BIBREF48 addressed the problem by combining traditional text classification techniques (bag of words) with contexts of entity mentions. Others BIBREF49 thoroughly examined article categories as a potential source of is-a relations in a taxonomy (99 per cent of entries have at least one category). Inhomogeneity of categories turned out as the main problem, dealt with by a heuristic classifier, assigning is-a and not-is-a labels. Categories were also used as features in a NER task BIBREF50 , but it required a set of manually designed patterns to differentiate between categories of different nature.
Exploring a correspondence between Wikipedia entries and WordNet synsets found an application in automatic enriching ontologies with encyclopaedic descriptions BIBREF51 . However, only NEs already appearing in the WordNet were considered. The task (solved by bag-of-words similarity) is non-trivial only in case of polysemous words, e.g. which of the meanings of Jupiter corresponds to which Wikipedia article? Others BIBREF52 concentrated on the opposite, i.e. extending the WordNet by NEs that are not there yet by adding titles of entries as instances of synsets corresponding to their common category.
Also, some see Wikipedia as an excellent source of high-quality NER training data. Again, it requires projecting entries onto NE categories. A thorough study of this problem, presented by BIBREF53 , utilizes features extracted from article content (bag of words), categories, keywords, inter-article and inter-language links. The final annotated corpus turns out to be as good for NER training as a manually annotated gold standard.
Finally, some researchers try to generalise NER to other categories, but keep the same machine-learning-based approach. For example, BIBREF54 developed a tagger, assigning words in a text to one of 41 supersenses. Supersenses include NE categories, but also other labels, such as plant, animal or shape. The authors projected word-sense annotations of publicly available corpora to supersenses and applied perceptron-trained Hidden Markov Model for sequence classification, obtaining precision and recall around 77 per cent.
System Architecture
A general architectural scheme of RAFAEL (figure FIGREF11 ) has been inspired by similar systems developed for English; for examples see works by BIBREF22 and BIBREF23 .
Two of the steps in the diagram concern offline processing of a knowledge base. Firstly, it is indexed by a search engine to ensure efficient searching in further stages (INDEXING). Secondly, it may be annotated using a set of tools (NLP), but this could also happen at an answering stage for selected documents only.
After the system receives a question, it gets analysed (QUESTION ANALYSIS) and transformed into a data structure, called question model. One of its constituents, a search query, is used to find a set of documents, which are probably appropriate for the current problem (SEARCH). For each of the documents, all entity mentions compatible with an obtained question type (e.g. monarchs), are extracted (ENTITY RECOGNITION). For each of them, a context is generated (CONTEXT GENERATION). Finally, a distance between a question content and the entity context is computed to asses its relevance (DISTANCE MEASURE). All the mentions and their distance scores are stored and, after no more documents are left, used to select the best match (BEST ENTITY SELECTION). The system returns the entity, supplied with information about a supporting sentence and a document, as an answer.
Knowledge Base Processing
Knowledge base (KB) processing consists of two elements: indexing and annotating. The objective of the first is to create an index for efficient searching using a search engine. In the system, Lucene 3.6 is used to build two separate full-text indices: regular and stemmed using a built-in stemmer for Polish, Stempel BIBREF24 .
Secondly, texts go through a cascade of annotation tools, enriching it with the following information:
Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,
Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,
Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,
Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .
All the annotations are stored in a variant of TEI P5 standard, designed for the National Corpus of Polish BIBREF31 . As noted previously, the process of annotating is not indispensable at the stage of offline KB processing; it could be as well executed only on documents returned from the search engine (for example see Webclopedia by BIBREF22 or LASSO by BIBREF23 ). However, since during evaluation experiments the same documents undergo the process hundreds of times, it seems reasonable to process the whole KB only once.
Question Analysis
The goal of question analysis is to examine a question and extract all the information that suffices for answer finding. A resulting data structure, called question model, contains the following elements:
Question type – a description of the expected answer type, instructing the system what type of data could be returned as an answer. It has three levels of specificity:
General question type – one of the types of factoid questions, enumerated at the beginning of this chapter,
Named entity type – applicable only in case general type equals named entity. Possible values are the following: place, continent, river, lake, mountain, mountain range, island, archipelago, sea, celestial body, country, state, city, nationality, person, first name, last name, band, dynasty, organisation, company, event, date, century, year, period, number, quantity, vehicle, animal, title.
Focus synset – applicable in case of entity questions; a WordNet synset, to which a question focus belongs; necessary for DeepER.
Search query – used to find possibly relevant documents,
Question content – the words from question which are supposed to appear also in context of an answer.
The task presented above, called question classification, is an example of text classification with very short texts. It could be tackled by a general-purpose classifier; for example, BIBREF11 used SVMs (Support Vector Machines) for a closed-domain Slovene QA system; BIBREF32 employed SNoW (Sparse Network of Winnows) for hierarchical classification of TREC questions. For Polish, results are not satisfactory BIBREF4 because of data sparsity.
However, sometimes a solution seems quite evident, as some of the question types enforce the question's structure. For example, when a question begins with Who or When, it belongs to the person and date question types, respectively. That is why a set of 176 regular expressions (in the case of RAFAEL) suffices to deal with them. They match only a subset of questions (36.15 per cent of the training set), but are highly unambiguous (precision of classification equals 95.37 per cent). Nevertheless, some BIBREF33 use solely such patterns, but need a great number of them (1,273).
Unfortunately, most of entity questions are ambiguous, i.e. it is not enough to inspect an interrogative pronoun to find an answer type. They may begin with what or which, followed by a question focus. For example, let us consider a question Which russian submarine sank in 2000 with its whole crew?. Its focus (russian submarine) carries information that the question could be answered by a named entity of type vehicle. The whole process of focus analysis is shown in figure FIGREF25 . The first nominal group after a pronoun serves as a possible lexeme name in plWordNet 2.1 BIBREF34 . As long as there are no results, it gets replaced by its semantic head. When a matching lexeme exists in WordNet, a set of all its hypernyms is extracted. If any of the elements in the set correspond to one of the named entity types, this type is recorded in the question model. Otherwise the general question type takes the value unnamed entity. A WordNet-assisted focus analysis was also implemented in one of solutions participating in a TREC competition BIBREF35 .
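A minimal sketch of this focus analysis is given below. RAFAEL itself uses plWordNet; the illustration relies on NLTK's English WordNet instead, and the mapping from synsets to named entity types is a small hypothetical excerpt, not the system's full table.

```python
from nltk.corpus import wordnet as wn

SYNSET_TO_NE_TYPE = {
    "vehicle.n.01": "vehicle",
    "person.n.01": "person",
    "animal.n.01": "animal",
    "lake.n.01": "lake",
}

def focus_to_question_type(focus: str):
    for synset in wn.synsets(focus, pos=wn.NOUN):
        # Check the synset itself and all of its (direct or indirect) hypernyms.
        for h in [synset] + list(synset.closure(lambda s: s.hypernyms())):
            ne_type = SYNSET_TO_NE_TYPE.get(h.name())
            if ne_type is not None:
                return "NAMED_ENTITY", ne_type, synset   # focus synset kept for DeepER
    return "UNNAMED_ENTITY", None, None

# e.g. focus_to_question_type("submarine") -> ("NAMED_ENTITY", "vehicle", Synset('submarine.n.01'))
```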
Search query generation is described in the next chapter. The last element of a question model, called question content, contains the segments which are to be compared with texts to find the best answer. It includes all the words of the interrogative sentence except those included in the matched pattern (Which, ?) and the focus (submarine). In our example the following are left: russian, sank, in, 2000, with, its, whole, crew. An entity mention whose context resembles this set will be selected as an answer (see details in section SECREF33 ).
The question analysis stage explained above follows a design presented in previous works BIBREF4 , BIBREF36 , where more details could be found. The major difference lies in result processing – an original synset is not only projected to one of the named entity types, but also recorded as a focus synset in question type, utilised in DeepER to match entity types. In our example, it would only consider submarines as candidate answers.
Document Retrieval
The use of search engines in QA systems is motivated mainly by performance reasons. Theoretically, we could analyse every document in a text base and find the most relevant to our query. However, it would take excessive amount of time to process the documents, majority of which belong to irrelevant domains (839,269 articles in the test set). A search engine is used to speed up the process by selecting a set of documents and limiting any further analysis to them.
As described in section SECREF12 , a knowledge base is indexed by Lucene offline. Given a question, we need to create a search query. The problem is that an answer in the knowledge base is probably expressed differently than the question. Hence, a query created directly from words of the question would not yield results, unless using a highly-redundant KB, such as the WWW (for this type of solution see BIBREF37 ). Therefore, some of the query terms should be dropped – based on their low IDF BIBREF38 or more complex heuristics BIBREF23 . On the other hand, the query may be expanded with synonyms BIBREF22 or derived morphological forms BIBREF38 .
Finally, we need to address the term matching issue – how to compare a query keyword and a text word in a morphologically rich language, such as Polish? Apart from exact match, it is also possible to use a stemmer or fuzzy queries, available in Lucene (accepting a predefined Levenshtein distance between matching strings).
Previous experiments BIBREF36 led to the following query generation procedure (a code sketch follows the list):
Remove all words matched by a regular expression at the classification stage (What, Which, etc.),
Keep a question focus,
Connect all the remaining words by OR operator,
Use fuzzy term matching strategy with absolute distance equal 3 characters and fixed prefix.
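The sketch below illustrates the listed procedure by assembling a Lucene-style query string. RAFAEL presumably applies the fuzzy constraints (absolute distance of 3 characters, fixed prefix) programmatically through Lucene's fuzzy query parameters; the plain "~" suffix used here is only an approximation of that behaviour.

```python
def build_query(question_words, pattern_words, focus_words):
    # Drop words matched by the classification pattern, but always keep the focus.
    kept = [w for w in question_words
            if w in focus_words or w not in pattern_words]
    # Join the remaining terms with OR and request fuzzy matching for each of them.
    return " OR ".join(f"{w}~" for w in kept if w not in {"?", ",", "."})

# e.g. build_query(["Which", "russian", "submarine", "sank", "in", "2000", "?"],
#                  {"Which", "?"}, {"russian", "submarine"})
# -> "russian~ OR submarine~ OR sank~ OR in~ OR 2000~"
```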
Lucene handles the query and yields a ranked document list, of which the first N documents are transferred to further analysis. The influence of the value of N on answering performance is evaluated in section SECREF88 .
Entity Recognition
Having a set of proposed documents and a question type, the next step is to scan them and find all mentions of entities with appropriate types. RAFAEL includes two approaches to the problem: classical Named Entity Recognition (NER) and novel Deep Entity Recognition.
Three NERs for Polish are employed: NERF, Liner2 and Quant. NERF BIBREF29 is a tool designed within the project of the National Corpus of Polish and is based on linear-chain conditional random fields (CRF). It recognizes 13 types of NEs, possibly nested (e.g. Warsaw in University of Warsaw). Liner2 BIBREF30 also employs CRFs, but differentiates NEs of 56 types (which could be reduced to 5 for higher precision). Annotation using both of the tools happens offline within the KB preprocessing, so at the currently described stage it suffices to browse the annotations and find matching entities. As the above tools lack recognition of quantitative expressions, a new one, called Quant, has been developed especially for RAFAEL. It is able to handle both numbers and quantities (using WordNet) in a variety of notations.
Appendix A contains details of the implementation of named entity recognition in RAFAEL, including a description of Quant and a mapping between question types and named entity types available in NERF and Liner2. The alternative in the focus of this work, i.e. the DeepER approach, is thoroughly discussed in chapter SECREF3 .
RAFAEL may use any of the two approaches to entity recognition: NER (via NERF, Liner2 and Quant) or novel DeepER; this choice affects its overall performance. Experiments showing precision and recall of the whole system with respect to applied entity recognition technique are demonstrated in section SECREF88 .
An entity recognition step is performed within the question answering process and aims at selecting all entity mentions in a given annotated document. Before it begins, the entity library is read into a PATRICIA trie, a very efficient prefix tree. In this structure, every entity name becomes a key for storing a corresponding list of entities.
When a document is ready for analysis, it is searched for strings that match any of the keys in the trie. The candidate chunks (sequences of segments) come from three sources:
lemmata of words and syntactic groups,
sequences of words in surface forms (as they appear in text),
sequences of words in base forms (lemmata).
The last two techniques are necessary because nominal group lemmatisation often fails, especially in the case of proper names. Their rich inflection in Polish BIBREF3 means that a nominal suffix of an entity may be hard to predict. Therefore, a chunk is considered to match an entity name if (see the sketch after this list):
they share a common prefix,
an unmatched suffix in neither of them is longer than 3 characters,
the common prefix is longer than the unmatched chunk suffix.
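A minimal sketch of this test is given below; it assumes both strings are already lower-cased and treats the three conditions literally.

def matches_entity_name(chunk, name, max_suffix=3):
    # Length of the common prefix of the chunk and the entity name.
    prefix = 0
    while prefix < min(len(chunk), len(name)) and chunk[prefix] == name[prefix]:
        prefix += 1
    chunk_suffix = len(chunk) - prefix   # unmatched part of the chunk
    name_suffix = len(name) - prefix     # unmatched part of the entity name
    return (prefix > 0
            and chunk_suffix <= max_suffix
            and name_suffix <= max_suffix
            and prefix > chunk_suffix)

# 'komorowskiego' (genitive) matches the base name 'komorowski'.
print(matches_entity_name('komorowskiego', 'komorowski'))  # True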
Given a list of entity mentions, RAFAEL checks their compatibility with the question model. Two of its constituents are taken into account: the general question type and the synset. An entity mention agrees with the NAMED_ENTITY type if its first segment starts with a capital letter, and it always agrees with UNNAMED_ENTITY. To pass the semantic agreement test, the synset of the question model needs to be a (direct or indirect) hypernym of one of the synsets assigned to the entity. For example, the list of synsets assigned to the entity Jan III Sobieski contains <król.1> (king), so it matches a question focus <władca.1, panujący.1, hierarcha.2, pan.1> (ruler) through the hypernymy path <władca.1, panujący.1, hierarcha.2, pan.1> → <monarcha.1, koronowana głowa.1> (monarch) → <król.1>. All the mentions of entities satisfying these conditions are returned for further processing.
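The semantic agreement test amounts to a walk up the hypernymy hierarchy, as in the sketch below; the hypernyms mapping stands in for plWordNet access and is an assumption of this illustration.

def agrees_semantically(question_synset, entity_synsets, hypernyms):
    # True if question_synset is a (direct or indirect) hypernym of any
    # synset assigned to the entity; hypernyms maps a synset identifier
    # to the identifiers of its direct hypernyms.
    for synset in entity_synsets:
        stack, seen = [synset], set()
        while stack:
            current = stack.pop()
            if current == question_synset:
                return True
            if current not in seen:
                seen.add(current)
                stack.extend(hypernyms.get(current, []))
    return False

# Toy hierarchy: król (king) -> monarcha (monarch) -> władca (ruler).
toy = {'król.1': ['monarcha.1'], 'monarcha.1': ['władca.1']}
print(agrees_semantically('władca.1', ['król.1'], toy))  # True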
Mention selection
When a list of entity mentions in a given document is available, we need to decide which of them most likely answers the question. The obvious way to do that is to compare surroundings of every mention with the content of the question. The procedure consists of two steps: context generation and similarity measurement.
The aim of the context generation step is to create, for every entity mention, a set of surrounding segments that serves as its context. Without capabilities of full text understanding, two approximate approaches seem legitimate:
Sentence-based – for a given entity mention, a sentence in which it appears, serves as a context,
Segment-based – for a given entity mention, every segment sequence of length M, containing the entity, is a context.
Both of them have some advantages: relying on a single sentence ensures a relation between an entity and its context, whereas the window-based approach makes it possible to adjust the context length. Obviously, the value of M should be proportional to the length of the question (precisely, of its content).
The method of treating sentences as a context has gained most popularity (see work of BIBREF39 ), but a window of fixed size also appears in the literature; for example BIBREF38 used one with M=140 bytes.
The context generation is also related to another issue, i.e. anaphoric expressions. Some segments (e.g. this, him, they) may refer to entities that occurred earlier in a text and therefore harm a similarity estimation. It could be tackled by applying anaphora resolution, but a solution for Polish BIBREF40 remains in an early stage. Observations show that the majority of anaphora refer to an entity in a document title, so the problem is partially bypassed by adding a title to a context.
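Both strategies, together with the optional title prefix, could be sketched as follows; for simplicity only one, roughly centred window is generated here (whereas the system considers every window containing the mention) and the proportionality between M and the question length is an assumption of the example.

def sentence_context(sentence_segments, title_segments=None):
    # Sentence-based context: the sentence containing the mention,
    # optionally prefixed with the document title to partially
    # compensate for unresolved anaphora.
    return (title_segments or []) + sentence_segments

def window_context(doc_segments, mention_start, mention_end,
                   question_length, ratio=1.5, title_segments=None):
    # Segment-based context: a window of M segments containing the mention,
    # with M proportional to the question content length.
    m = max(mention_end - mention_start, int(ratio * question_length))
    left = max(0, mention_start - (m - (mention_end - mention_start)) // 2)
    return (title_segments or []) + doc_segments[left:left + m]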
An influence of the context generation techniques on final results is shown in section SECREF88 .
To measure a similarity between a question content (explained in section SECREF18 ) and an entity context (generated by the procedures in the previous section), a Jaccard similarity index BIBREF41 is computed. However, not all word co-occurrences matter equally (e.g. compare this and Honolulu), so word weights are used:
similarity(Q, C) = \frac{\sum_{b \in Q \cap C} w_b}{\sum_{b \in Q \cup C} w_b}
The sets Q and C contain segments in base forms, whereas w_b denotes the weight of the b-th base form, equal to its scaled IDF computed on a document set D:
w_b \propto \log \frac{|D|}{|\{d \in D : b \in d\}|}
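A direct implementation of this measure could look as follows; the exact IDF scaling used by the system is simplified here and should be treated as an assumption.

import math

def idf_weights(documents):
    # documents: a list of sets of base forms, one set per document in D.
    n = len(documents)
    df = {}
    for doc in documents:
        for b in doc:
            df[b] = df.get(b, 0) + 1
    return {b: math.log(n / count) for b, count in df.items()}

def weighted_jaccard(question_bases, context_bases, weights):
    union = question_bases | context_bases
    intersection = question_bases & context_bases
    denominator = sum(weights.get(b, 0.0) for b in union)
    if denominator == 0.0:
        return 0.0
    return sum(weights.get(b, 0.0) for b in intersection) / denominator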
The Jaccard index is a popular solution for sentence similarity measurement in QA (for example see a system by BIBREF42 ). In case of selecting relevant documents, cosine measure is also applied. BIBREF18 compared it to Minimal Span Weighting (MSW) and observed that the latter performs better, as it takes into account a distance between matched words. A study of different techniques for sentence similarity assessment could be found in BIBREF39 .
At this stage, a large set of scored pairs of entity mentions and their contexts is available. Which of them answers the question? Choosing the one with the highest score seems an obvious solution, but we could also aggregate scores of different mentions corresponding to the same answer (entity), e.g. compute their sum or mean. However, such experiments did not yield any improvement, so RAFAEL returns only a single answer with the highest score.
An answer consists of the following elements: an answer string, a supporting sentence, a supporting document and a confidence value (the score). The sentence and the document in which the best mention appeared are assumed to support the answer. Thanks to the properties of the Jaccard similarity, the mention score ranges from 0 for completely unrelated sentences to 1 for practically identical ones (ignoring inflection and word order). Therefore, it may serve as an answer confidence.
When no entity mentions satisfying the constraints of a question are found, no answer is returned. This type of result could also be used when the best confidence score is below a predefined value; the performance of such a technique is shown in section SECREF88 . The refusal to answer in case of insufficient confidence plays an important role in Jeopardy!, hence in IBM Watson BIBREF2 , but it was also used to improve precision in other QA systems BIBREF43 .
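Putting the last steps together, answer selection reduces to taking the maximum with an optional confidence cut-off; the record fields below simply mirror the answer structure described above.

def select_answer(scored_mentions, min_confidence=0.0):
    # scored_mentions: list of (entity, sentence, document, score) tuples.
    if not scored_mentions:
        return None                      # no compatible mention found
    best = max(scored_mentions, key=lambda mention: mention[3])
    entity, sentence, document, score = best
    if score < min_confidence:
        return None                      # refuse to answer (low confidence)
    return {'answer': entity, 'sentence': sentence,
            'document': document, 'confidence': score}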
Deep Entity Recognition
Deep Entity Recognition procedure is an alternative to applying Named Entity Recognition in QA to find entities matching question constraints. It scans a text and finds words and multi-word expressions, corresponding to entities. However, it does not assign them to one of several NE categories; instead, WordNet synsets are used. Therefore, named entities are differentiated more precisely (e.g. monarchs and athletes) and entities beyond the classical NE categories (e.g. species, events, devices) could also be recognised in a text.
It does not seem possible to perform this task relying solely on features extracted from words and surrounding text (as in NER), so it is essential to build an entity library. Such libraries already exist (Freebase, BabelNet, DBpedia or YAGO) and could provide an alternative for DeepER, but they concentrate on English. The task of adaptation of such a base to another language is far from trivial, especially for Slavonic languages with complex NE inflection BIBREF3 . An ontology taking into account Polish inflection (Prolexbase) has been created by BIBREF44 , but it contains only 40,000 names, grouped into 34 types.
Entity Library
An entity library for DeepER contains knowledge about entities that is necessary for deep entity recognition. Each of them consists of the following elements, exemplified here by entity #9751, describing the Polish president Bronisław Komorowski (a minimal data-structure sketch follows the list):
Main name: Bronisław Komorowski,
Other names (aliases): Bronisław Maria Komorowski, Komorowski,
Description URL: http://pl.wikipedia.org/wiki/?curid=121267,
plWordNet synsets:
<podsekretarz1, podsekretarz stanu1, wiceminister1> (vice-minister, undersecretary),
<wicemarszałek1> (vice-speaker of the Sejm, the Polish parliament),
<polityk1> (politician),
<wysłannik1, poseł1, posłaniec2, wysłaniec1, posłannik1> (member of a parliament),
<marszałek1> (speaker of the Sejm),
<historyk1> (historian),
<minister1> (minister),
<prezydent1, prezydent miasta1> (president of a city, mayor).
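In code, an entry of the entity library can be thought of as a small record like the one below; the field names and the simplified synset identifiers are assumptions of this sketch, not the storage format actually used.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    main_name: str
    aliases: List[str] = field(default_factory=list)
    description_url: str = ''
    synsets: List[str] = field(default_factory=list)  # plWordNet synset identifiers

komorowski = Entity(
    main_name='Bronisław Komorowski',
    aliases=['Bronisław Maria Komorowski', 'Komorowski'],
    description_url='http://pl.wikipedia.org/wiki/?curid=121267',
    synsets=['podsekretarz.1', 'wicemarszałek.1', 'polityk.1', 'poseł.1',
             'marszałek.1', 'historyk.1', 'minister.1', 'prezydent.1'],
)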
A process of entity library extraction is performed offline, before question answering. The library built for deep entity recognition in RAFAEL, based on the Polish Wikipedia (857,952 articles, 51,866 disambiguation pages and 304,823 redirections), contains 809,786 entities with 1,169,452 names (972,592 unique). The algorithm does not depend on any particular feature of Wikipedia, so any corpus containing entity definitions could be used.
Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account.
The whole process is more complicated than the simple example shows. Generally, it consists of the following steps:
Prepare a corpus – data format and annotation process is the same as for a knowledge base, used in question answering, see section SECREF12 . It differs in scope of page categories, including not only articles, but also disambiguation and redirection pages.
For each of article pages, extract the first paragraph and apply readDefinition function. If a resulting entity has a non-empty synset list, add it to the library. If some of the redirection pages point to the entity name, add their names as entity aliases.
For each of disambiguation pages, extract all items and apply readDefinition function. If an item refers to an existing entity, extend it with extracted synsets and disambiguation page name. Create a new entity otherwise. Add redirection names as previously.
Save the obtained base for future use.
Function readDefinition(paragraph) – interprets a definition to assign synsets to an entity.
Input: paragraph – annotated first paragraph of an encyclopaedic entry
Output: synsets – synsets describing the entity
synsets := {}
paragraph := removeInBrackets(paragraph)
paragraph := removeInQuotes(paragraph)
for pattern in definitionPatterns:
    if paragraph matches pattern:
        paragraph := match(paragraph, pattern).group(2)
        break
paragraph := removeDefinitionPrefixes(paragraph)
chunks := split(paragraph, separators)
for chunk in chunks:
    group := firstGroupOrWord(chunk)
    if isNominal(group):
        synsets := synsets ∪ extractSynsets(group)
    else:
        break
return synsets
The readDefinition function (shown as algorithm SECREF40 ) analyses a given paragraph of text and extracts a set of synsets describing the entity to which it corresponds, as exemplified by figure FIGREF54 . In short, this is done by removing all unnecessary text (in brackets or quotes), splitting it on predefined separators (commas, full stops, semicolons) and applying the extractSynsets function with an appropriate stop criterion. The readDefinition function makes use of the following elements:
removeInBrackets – removes everything that is between brackets ([], () or {}) from the text (step (1) in figure FIGREF54 ).
removeInQuotes – removes everything between single or double quotes from the text (step (1) in the example).
definitionPatterns – contains patterns of strings separating a defined concept from a definition, e.g. hyphens or dashes (used in step (2) of the example) or jest to (is a).
removeDefinitionPrefixes – removes expressions commonly prefixing a nominal group, such as jeden z (one of), typ (a type of) or klasa (a class of), not present in the example.
separators – a set of three characters that separate parts of a definition: ".", "," and ";".
firstGroupOrWord – returns the longest syntactic element (syntactic group or word) starting at the beginning of a chunk (step (4) in the example).
isNominal – decides whether a chunk is a noun in nominative, a nominal group or a coordination of nominal groups.
Function extractSynsets(chunk) – recursively extracts synsets from a nominal chunk.
Input: chunk – a nominal chunk (a syntactic group or a single noun)
Output: WordNet synsets corresponding to chunk
lemma := lemmatise(chunk)
if inWordNet(lemma):
    return getLexemes(lemma).synset(0)
if isCoordination(chunk):
    synsets := {}
    for element in chunk:
        synsets := synsets ∪ extractSynsets(element)
    return synsets
if isGroup(chunk):
    return extractSynsets(chunk.semanticHead)
return {}
The extractSynsets function (shown as algorithm SECREF40 ) accepts a nominal chunk and extracts WordNet synsets corresponding to it. It operates recursively to discard any unnecessary chunk elements and find the longest subgroup having a counterpart in WordNet. It corresponds to step (5) in figure FIGREF54 and uses the following elements:
lemmatise – returns a lemma of a nominal group.
inWordNet – checks whether a given text corresponds to a lexeme in WordNet.
getLexemes – returns a list of WordNet lexemes corresponding to a given text.
synset(k) – returns the synset containing a lexeme in a given word sense number k.
isCoordination – returns TRUE iff a given chunk is a coordination group.
isGroup – returns TRUE iff a given chunk is a group.
semanticHead – an element of a syntactic group, denoted as its semantic head.
A few of the design decisions reflected in these procedures require further comment. First of all, they differ a lot from studies that represent a definition as a bag of words BIBREF48 , BIBREF51 , BIBREF53 . Here, a certain definition structure is assumed, i.e. a series of nominal groups divided by separators. What is more, as the full stop belongs to the separators, the series may continue beyond a single sentence, which has improved recall in preliminary experiments. The availability of a shallow parsing layer and group lemmatisation allows querying WordNet with syntactic groups instead of single nouns, as in the work of BIBREF46 . As word order is relatively free in Polish, a nominal group cannot be assumed to end with a noun, as BIBREF47 did. Instead, the semantic head of a group is used.
Finally, the problem of the lack of word sense disambiguation remains – the line getLexemes(lemma).synset(0) means that the synset connected to the first meaning of a lexeme is always selected. We assume that it corresponds to the most common meaning, but that is not always the case – in our example in figure FIGREF54 <prezydent.1, prezydent miasta.1> (president of a city, i.e. mayor) precedes <prezydent.2> (president of a country, the obvious meaning). However, it does not have to harm QA performance as long as the question analysis module (section SECREF18 ) functions analogously, e.g. in the case of a question beginning with który prezydent... (which president...). Therefore, the decision has been motivated by the relatively good performance of this solution in previously performed experiments on question analysis BIBREF36 . It also works in other applications, e.g. gazetteer generation BIBREF46 .
To assess the quality of the entity library, its content has been compared with synsets manually extracted from 100 randomly selected Wikipedia articles. 95 of them contain a description of an entity in the first paragraph. Among those, the DeepER entity library includes 88 (per-entity recall 92.63 per cent). 135 synsets have been manually assigned to those entities, while the corresponding set in the library contains 133 items. 106 of them are equal (per-synset precision 79.70 per cent), while 13 differ only by word sense. 16 of the manually extracted synsets have no counterpart in the entity library (per-synset recall 88.15 per cent), which instead includes 14 false synsets.
Evaluation
Evaluation of RAFAEL is typical for factoid QA systems: given a knowledge base and a set of questions, its responses are compared to the expected ones, prepared in advance. Section SECREF80 describes the data used in this procedure, whereas section SECREF87 explains how an automatic evaluation is possible without human labour.
Data
The Polish Wikipedia serves as a knowledge base. It has been downloaded from the project site as a single database dump on 03.03.2013, from which plain text files have been extracted using the Wikipedia Extractor 2.2 script. It means that only plain text is taken into account – without lists, infoboxes, tables, etc. This procedure leads to a corpus with 895,486 documents, containing 168,982,550 segments, which undergo the annotation process described in section SECREF12 .
The questions that are to be answered with the knowledge base come from two separate sets:
The development set is based on 1500 (1130 after filtering) questions from a Polish quiz TV show, called Jeden z dziesięciu BIBREF55 . It was involved in previous experiments BIBREF4 , BIBREF36 .
The evaluation set is based on an open dataset for Polish QA systems, published by BIBREF56 . It has been gathered from the Did you know... column, appearing on the main page of the Polish Wikipedia. It contains 4721 questions, of which 1000 have been analysed, resulting in 576 satisfying the task constraints given in chapter SECREF2 .
Table TABREF85 shows a distribution of different question types and named entity types in the sets.
To each of the questions from both sets some information has been assigned manually. It includes an identification number, an expected answer string, a general question type, a named entity type (if applicable) and an expected source document. Table TABREF86 contains several exemplary questions from the development set.
The additional information (question types and expected documents) makes it possible to evaluate only selected modules of the whole QA system. For example, we could test question classification by comparing results against given question types or entity selection by analysing only the relevant document.
Automatic Evaluation
Thanks to availability of the DeepER entity library, it is possible to automatically perform answer evaluation for all the question types that are recognised by this technique (UNNAMED_ENTITY and NAMED_ENTITY excluding dates, numbers and quantities).
Both an expected and obtained answer are represented as short strings, e.g. Bronisław Komorowski. However, it does not suffice to check their exact equality. That is caused by existence of different names for one entity (Bronisław Maria Komorowski or Komorowski), but also rich nominal inflection (Komorowskiego, Komorowskiemu, ...).
In fact, we want to compare entities, not names. Hence, deep entity recognition is a natural solution here. To check the correctness of an answer, we use it as an input for the recognition process described in section SECREF73 . Then, it is enough to check whether the expected answer appears in any of the lists of names assigned to the recognised entities. For example, let us consider the question: Kto jest obecnie prezydentem Polski? (Who is the current president of Poland?) with expected answer Bronisław Komorowski and a system answer Komorowski. The DeepER process finds many entities in the string (all the persons bearing this popular surname). One of them is the question goal and hence has Bronisław Komorowski in its list of names.
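The whole check can be expressed in a few lines; recognize_entities stands for the DeepER recognition step of section SECREF73 and is assumed to return entity records with main_name and aliases fields, as in the earlier sketch.

def is_correct(system_answer, expected_answer, recognize_entities):
    # Run deep entity recognition on the returned string and accept it if
    # any recognised entity lists the expected answer among its names.
    for entity in recognize_entities(system_answer):
        if expected_answer == entity.main_name or expected_answer in entity.aliases:
            return True
    return False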
As the process of entity recognition is imperfect, so is the automatic evaluation. However, it still lets us notice general trends in answering performance with respect to several factors. Of course, the final evaluation needs to be checked manually.
Results
As mentioned in the previous section, the results consist of two groups: experiments showing the influence of some aspects of the algorithm on performance, and a final assessment. Both use the Polish Wikipedia as a knowledge base, whereas the questions asked belong to the development and evaluation sets, respectively. In this section, recall measures the percentage of questions to which RAFAEL gave any answer, whereas precision denotes the percentage of questions answered correctly.
When analysing results of different entity recognition techniques, we need to remember that they strongly rely on the output of the question analysis, which is not perfect. In particular, tests show that 15.65 per cent of questions are assigned to a wrong type and 17.81 per cent of search results do not include the expected document BIBREF36 . The entity recognition (ER) stage, a focus of this work, is very unlikely to deliver valid answers in these cases. However, as the expected question type and source document are available in question metadata, it is possible to correct the results of question analysis by artificially replacing a wrong type and/or adding the expected document to the retrieved set. In that way the ER modules could be evaluated as if question analysis worked perfectly. Note that this approach slightly favours NER-based solutions, as the question metadata contains general types and named entity types but lacks focus synsets, used by DeepER.
Experiments
The goal of the first experiment is to test how the number of documents retrieved from the search engine and analysed by the entity recognition techniques influences the performance. Question classification errors have been bypassed as described in the previous paragraph. Additionally, two versions have been evaluated: with and without corrections of the retrieved set of documents. Figure FIGREF89 demonstrates results for different entity recognition techniques.
As we can see, if a retrieved set contains the desired article, adding new documents slightly increases recall, while precision drops observably. That is because additional irrelevant documents usually introduce noise. However, in some cases they are useful, as increasing recall indicates. On the other hand, if we have no guarantee of presence of the expected document in a list, it seems more desirable to extend it, especially for small sizes. For sets bigger than 50 elements, the noise factor again dominates our results. Judging by F1 measure, the optimal value is 20 documents.
When it comes to the comparison, it should be noted that DeepER performs noticeably better than traditional NER. The gain in precision is small, but recall is almost twice as big. It could be easily explained by the fact that the NER solutions are unable to handle UNNAMED_ENTITY type, which accounts for 36 per cent of the entity questions.
It is also worthwhile to check how the system performs while using different values of the minimal confidence rate (Jaccard similarity), as described in section UID38 . It could become useful when we demand higher precision and accept a lower recall ratio. The plot in figure FIGREF90 shows answering performance using DeepER with corrected question analysis with respect to the minimal confidence rate. Generally, the system behaves as expected, but the exact values disappoint. The precision remains at a level of 25-40 per cent up to a confidence of 0.75, at which point recall drops to only 0.35 per cent. Values of the F1 measure suggest that 0.2 is the highest sensible confidence rate.
One more parameter worth testing, explained in section UID34 , is the context generation strategy. To find the entity with a context most similar to a question content, we could analyse a single sentence, where it appears, or a sequence of words of a predefined length. For both of these solutions, we could also add a document title, as it is likely to be referred to by anaphoric expressions. Figure FIGREF91 shows the value of precision (recall does not depend on context) for these four solutions.
We can see that inclusion of a title in a context helps to achieve better precision. The impact of the anaphoric reference to the title emerges clearly in case of the flexible context – the difference grows with context size. Quite surprisingly, for the optimal context length (1.5 * question size), it is the other way round. However, because of the small difference between the techniques including the title, for the sake of simplicity, the single sentence is used in the final evaluation.
Final System Evaluation
To impose a realistic challenge to the system, the evaluation set, used at this stage, substantially differs from the one used during the development (see section SECREF80 ). A configuration for the final evaluation has been prepared based on results of the experiments. All of the tested versions share the following features:
no question analysis corrections,
question classification and query generation solutions which proved best in the previous experiments (see section SECREF18 ),
a retrieved set of documents including 20 articles,
no minimal confidence,
single sentence context with title.
Tested solutions differ with respect to entity recognition only; RAFAEL variants based on the following options are considered:
quantities recognizer (Quant),
traditional NER solutions: Nerf and Liner2,
deep entity recognition (DeepER),
hybrid approach, where entity mentions were gathered from all the above sources.
Table TABREF103 shows results of the final evaluation, expressed by recall, precision, F1 measure and Mean Reciprocal Rank (MRR). Standard deviations of these values have been obtained by bootstrap resampling of the test set. Additionally, precision obtained by automatic evaluation has been added, where applicable. As we can see, only a small percentage of questions is handled by the quantitative entities recognition. NER-based solutions deal with slightly more (Nerf) or less (Liner2) than a half of the questions. When using DeepER, the recall ratio rises to 73 per cent while the precision does not differ significantly. That is because UNNAMED_ENTITY questions (unreachable for traditional NER) account for a substantial part of the test set. The maximum recall is obtained by the hybrid solution (90 per cent) but it comes at a cost of lower precision (33 per cent). On the other hand, when we take the whole ranking lists into account, traditional NERs seem to perform better (in terms of MRR).
As expected, the automatic evaluation underestimates precision, but the difference remains below 5 per cent. Judging by F1 measure, the hybrid solution seems to beat the others.
Discussion
The main strength of DeepER compared to NER, according to the results shown in table TABREF103 , is much higher recall. Table TABREF106 shows examples of questions to which only DeepER provides a correct answer. As we can see (notice question foci in the table), they could not be assigned to any of the traditional NE categories.
The other striking fact in the results is low precision. A part of the wrong answers was inspected and most of the errors seem to result from the following phenomena:
The entity recognizers also introduce errors typical for them:
The last remark applies also to other techniques. For example, consider a word kot, which means a cat. However, it is also a name of a journal, a lake, a village, a badge (KOT), a surname of 10 persons in the Polish Wikipedia and much more. A human would usually assume the most common meaning (a cat), but the system treats them as equally probable. It introduces noise in the process, as such an entity matches many types of questions.
Another thing that demands explanation is the difference in precision of answers found using Liner2 and DeepER: in the evaluation set the latter does not maintain its advantage from the development set. It could be explained by different compositions of the question sets (table TABREF85 ) – the development one contains many more questions beginning with ambiguous pronouns followed by a question focus, e.g. Który poeta... (which poet), thus providing a precise synset (a poet) for deep entity recognition. Members of the evaluation set much more frequently begin with pronouns like Kto ... (who), where a synset corresponds to a general NE type (a person).
As RAFAEL is the first Polish QA system able to answer with entities instead of documents, we cannot compare it directly to any other solution. However, the evaluation set has been created based on questions published by BIBREF56 and used for evaluation of a document retrieval system BIBREF18 . Their baseline configuration achieved a@1 (the percentage of questions answered by the first document, corresponding to precision in table TABREF103 ) equal to 26.09 per cent. By taking into account the proximity of keyword matches (MCSW method), they improved the result to 38.63 per cent. We can see that RAFAEL, despite solving a much more challenging problem, obtains better precision than the baseline in all configurations; using Liner2 it beats even the best method tested on this set (MCSW).
The results suggest two possible directions of future work to improve the performance of RAFAEL. Firstly, involving semantics in sentence matching could solve some of the problems mentioned above. There are a lot of techniques in that area, also in QA systems (see a variety of them used by BIBREF39 ), but their implementation in a morphologically rich language would require a thorough study. For example, there exist techniques computing semantic similarity based on a WordNet graph BIBREF57 , which is available for Polish and proved very useful in this study. Secondly, the relatively good performance of the hybrid ER indicates that it may be beneficial to apply different entity recognizers to different questions. For example, we could evaluate them for each question type separately and select the one that performs best for a given type. However, it would require much more training data to have a substantial number of questions of each type, including the scarce ones (observe the sparsity of table TABREF85 ).
When it comes to DeepER, word ambiguity seems to be the main issue for future efforts. Of course, a full-lexicon precise word-sense disambiguation tool would solve the problem, but we cannot expect it in the near future. Instead, we could select a synset somewhere on the path between a focus synset and a named entity type. In the example from figure FIGREF54 , rather than choosing between <prezydent.1, prezydent miasta.1> (president of a city) and <prezydent.2> (president of a country) we could use <urzędnik.1, biuralista.1> (official), which covers both meanings.
Conclusions
This paper introduces RAFAEL, a complete open-domain question answering system for Polish. It is capable of analysing a given question, scanning a large corpus and extracting an answer, represented as a short string of text.
In its design, the focus has been on entity recognition techniques, used to extract all the entities compatible with a question from a given text. Apart from the traditional named entity recognition, differentiating between several broad categories of NEs, a novel technique, called Deep Entity Recognition (DeepER), has been proposed and implemented. It is able to find entities belonging to a given WordNet synset, using an entity library, gathered by interpreting definitions from encyclopaedia.
Automatic evaluation, provided by the DeepER approach, has made it possible to perform several experiments, showing answering accuracy with respect to different parameters. Their conclusions have been used to prepare the final evaluation, whose results have been checked manually. They suggest that the DeepER-based solution yields precision similar to NER, but is able to answer many more questions, including those beyond the traditional categories of named entities.
Appendix A: Named Entity Recognition in RAFAEL
As mentioned in section SECREF32 , apart from DeepER, RAFAEL employs also traditional NER-based solutions for entity recognition: NERF, Liner2 and Quant. Each of them uses its own typology of named entities, which covers only a part of the types, enumerated in section SECREF18 . Table TABREF118 shows a correspondence between these types. As we can see, there are a few problems:
Problems 3 and 4 are solved by additional postprocessing code, extracting CENTURY from date and NAME and SURNAME from person_nam entities. In case of multi-segment person entities it assumes that the first and last word correspond to the first and last name, respectively.
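A sketch of this heuristic, under the assumption that single-segment person entities are left untouched:

def split_person_name(person_segments):
    # For multi-segment person_nam entities the first and last word are
    # assumed to correspond to the first and last name, respectively.
    if len(person_segments) >= 2:
        return person_segments[0], person_segments[-1]
    return None

print(split_person_name(['Jan', 'III', 'Sobieski']))  # ('Jan', 'Sobieski')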
While NERF and Liner2 are standalone NER tools and details of their design are available in previously mentioned publications, Quant has been created specifically for RAFAEL. To find numbers, it annotates all chains of segments according to a predefined pattern, which accepts the following types of segments:
The pattern is matched in greedy mode, i.e. it adds as many new segments as possible. It could recognise expressions like 10 tysięcy (10 thousand), kilka milionów (several million), 10 000 or 1.698,88 (1,698.88).
Quantity is a sequence of segments, recognised as a number, followed by a unit of measurement. To check whether a word denotes a unit of measurement, the plWordNet is searched for lexemes equal to its base. Then it suffices to check whether it belongs to a synset, having <jednostka miary 1> (unit of measurement) as one of (direct or indirect) hypernyms, e.g. piętnaście kilogramów (fifteen kilograms) or 5 000 watów (5 000 watts).
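The unit test can be sketched in the same spirit as the earlier hypernymy check; synsets_of and hypernym_closure are hypothetical plWordNet accessors and the greedy number matching is assumed to have already produced number_segments.

UNIT_OF_MEASUREMENT = 'jednostka miary.1'

def is_unit_of_measurement(word_base, synsets_of, hypernym_closure):
    # A word denotes a unit of measurement if any synset of its base form
    # has <jednostka miary 1> among its direct or indirect hypernyms.
    return any(UNIT_OF_MEASUREMENT in hypernym_closure(s)
               for s in synsets_of(word_base))

def recognise_quantity(number_segments, following_word_base,
                       synsets_of, hypernym_closure):
    # A quantity is a number expression followed by a unit of measurement.
    if number_segments and is_unit_of_measurement(following_word_base,
                                                  synsets_of, hypernym_closure):
        return number_segments + [following_word_base]
    return None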
Acknowledgments
Study was supported by research fellowship within "Information technologies: research and their interdisciplinary applications" agreement number POKL.04.01.01-00-051/10-00. Critical reading of the manuscript by Agnieszka Mykowiecka and Aleksandra Brzezińska is gratefully acknowledged. | Yes |
7b44bee49b7cb39cb7d5eec79af5773178c27d4d | 7b44bee49b7cb39cb7d5eec79af5773178c27d4d_0 | Q: How is the data in RAFAEL labelled?
Text: Introduction
A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. A task defined so broadly seems very hard; BIBREF0 describes it as AI-Complete, i.e. equivalent to building a general artificial intelligence. Nonetheless, the field has attracted a lot of attention in the Natural Language Processing (NLP) community, as it provides a way to employ numerous NLP tools in an exploitable end-user system. It has resulted in valuable contributions within TREC competitions BIBREF1 and, quite recently, in a system called IBM Watson BIBREF2 , successfully competing with humans in the task.
However, the problem remains far from solved. Firstly, solutions designed for English are not always easily transferable to other languages with more complex syntax rules and fewer resources available, such as Slavonic ones. Secondly, the vast complexity and formidable hardware requirements of IBM Watson suggest that there is still room for improvement, making QA systems smaller and smarter.
This work attempts to contribute in both of the above areas. It introduces RAFAEL (RApid Factoid Answer Extraction aLgorithm), a complete QA system for Polish language. It is the first QA system designed to use an open-domain plain-text knowledge base in Polish to address factoid questions not only by providing the most relevant sentence, but also an entity, representing the answer itself. The Polish language, as other Slavonic, features complex inflection and relatively free word order, which poses additional challenges in QA. Chapter SECREF2 contains a detailed description of the system architecture and its constituents.
In the majority of such systems, designers' attention focuses on different aspects of the sentence selection procedure. Herein, a different idea is incorporated, concentrating on an entity picking procedure. It allows the system to compare fewer sentences, namely those likely to contain an answer. To do that, classical Named Entity Recognition (NER) gets replaced by Deep Entity Recognition. DeepER, introduced in this work, is a generalisation of NER which, instead of assigning each entity to one of several predefined NE categories, assigns it to a WordNet synset.
For example, let us consider a question: Which exiled European monarch returned to his country as a prime minister of a republic?. In the classical approach, we recognise the question as concerning a person and treat all persons found in texts as potential answers. Using DeepER, it is possible to limit the search to persons being monarchs, which results in more accurate answers. In particular, we could utilise information that Simeon II (our answer) is a tsar; thanks to WordNet relations we know that it implies being a monarch. DeepER is a generalisation of NER also from another point of view – it goes beyond the classical named entity categories and treats all entities equally. For example, we could answer a question Which bird migrates from the Arctic to the Antarctic and back every year?, although arctic tern is not recognized as NE by NER systems. Using DeepER, we may mark it as a seabird (hence a bird) and include among possible answers. Chapter SECREF3 outlines this approach.
The entity recognition process requires an entity library, containing known entities, their text representations (different ways of textual notation) and the WordNet synsets to which they belong. To obtain this information, the program analyses definitions of entries found in an encyclopaedia (in this case the Polish Wikipedia). In the previous example, it would use the Wikipedia definition: The Arctic Tern (Sterna paradisaea) is a seabird of the tern family Sternidae. This process, involving also redirect and disambiguation pages, is described in section SECREF40 . Next, having all the entities and their names, it suffices to locate their mentions in a text. The task (section SECREF73 ) is far from trivial because of the complicated named entity inflection in Polish (typical for Slavonic languages, see BIBREF3 ).
The DeepER framework also provides another useful service, i.e. automatic evaluation. Usually QA systems are evaluated by verifying the agreement between the obtained and the expected answer based on human judgement. Plain string-to-string equality is not enough, as many entities have different text representations, e.g. John F. Kennedy is as good as John Fitzgerald Kennedy and John Kennedy, or JFK (again, the nominal inflection in Polish complicates the problem even more). However, with DeepER, a candidate answer can undergo the same recognition process and be compared to the expected entity, not a string.
Thanks to automatic evaluation, vast experiments requiring numerous evaluations may be performed swiftly, saving a massive amount of time and human resources. As a test set, authentic questions from a popular Polish quiz TV show are used. Results of experiments, testing (among others) the optimal context length, the number of retrieved documents and the type of entity recognition solution, appear in section SECREF88 .
To avoid overfitting, the final system evaluation is executed on a separate test set, previously unused in development, and is checked manually. The results are shown in section SECREF93 and discussed in chapter SECREF6 . Finally, chapter SECREF7 concludes the paper.
RAFAEL
As stated in previous chapter, RAFAEL is a computer system solving a task of Polish text-based, open-domain, factoid question answering. It means that provided questions, knowledge base and returned answers are expressed in Polish and may belong to any domain. The system analyses the knowledge base, consisting of a set of plain text documents, and returns answers (as concise as possible, e.g. a person name), supplied with information about supporting sentences and documents.
What are the kinds of requests that fall into the category of factoid questions? For the purpose of this study, it is understood to include the following types:
Although the above list rules out many challenging types of questions, demanding more elaborate answers (e.g. Why was JFK killed?, What is a global warming?, How to build a fence?), it still involves very distinct problems. Although RAFAEL can recognize factoid questions from any of these types and find documents relevant to them (see more in section SECREF18 and BIBREF4 ), its answering capabilities are limited to those requesting single unnamed entities and named entities. In this document, they are called entity questions.
The task description here is similar to the TREC competitions and, completed with test data described in section SECREF80 , could play a similar role for Polish QA, i.e. provide a possibility to compare different solutions of the same problem. More information about the task, including its motivation, difficulties and a feasibility study for Polish could be found in BIBREF5 .
Related work
The problem of Question Answering is not new to the Polish NLP community (nor to those working on other morphologically rich languages), but none of the studies presented so far coincides with the notion of plain text-based QA presented above.
First Polish QA attempts date back to 1985, when BIBREF6 presented a Polish interface to ORBIS database, containing information about the solar system. The database consisted of a set of PROLOG rules and the role of the system (called POLINT) was to translate Polish questions to appropriate queries. Another early solution, presented by BIBREF7 , could only work in a restricted domain (business information).
A system dealing with a subset of the TREC tasks was created for Bulgarian by BIBREF8 . His solution answers only three types of questions: Definition, Where-Is and Temporal. He was able to achieve good results with 100 translated TREC questions, using several manually created answer patterns, without NER or any semantic information. Another system for Bulgarian BIBREF9 participated in the CLEF 2005 competition. Its answer extraction module bases on partial grammars, playing a role of patterns for different types of questions. They could answer correctly 37 of 200 questions, of which only 16 belong to the factoid type. Previously the same team BIBREF10 took part in a Bulgarian-English track of the CLEF 2004, in which Bulgarian questions were answered using English texts.
A QA solution was also created for Slovene BIBREF11 . The task there is to answer students' questions using databases, spreadsheet files and a web service. Therefore, it differs from the problem discussed above by limited domain (issues related to a particular faculty) and the non-textual knowledge base. Unfortunately, no quantitative results are provided in this work.
More recently, several elements of a Polish QA system called Hipisek were presented by BIBREF12 . It is based on a fairly common scheme of transforming a question into a search query and finding the most appropriate sentence satisfying the question constraints. Unfortunately, a very small evaluation set (65 questions) and an unspecified knowledge base (gathered by a web crawler) make it difficult to compare the results. In their later works BIBREF13 , BIBREF14 , the team concentrated on spatial reasoning using a knowledge base encoded as a set of predicates.
The approach presented by BIBREF15 is the closest to the scope of this work, as it includes analysis of Polish Wikipedia content and evaluation is based on questions translated from a TREC competition. Unfortunately, it heavily relies on a structure of Wikipedia entries, making it impossible to use with an arbitrary textual corpus.
A non-standard approach to answer patterns has been proposed by BIBREF16 . In their Czech open-domain QA system they used a set of templates associated with question types, but also presented a method to learn them semi-automatically from search results. BIBREF17 in their Bulgarian QA system concentrated on semantic matching between a question and a possible answer, checked using dependency parsing. However, they provide no data regarding the answering precision of the whole system.
The last Polish system worth mentioning has been created by BIBREF18 . Generally, their task, called Open Domain Question Answering (ODQA), resembles what is treated here, but with one major difference: a document is considered an answer; therefore they focus on improving ranking in a document retrieval stage. They have found out that it could benefit from taking the proximity of query term occurrences into account.
As some of Slavonic languages lack necessary linguistic tools and resources, only partial solutions of QA problems exist for them, e.g. document retrieval for Macedonian BIBREF19 , question classification for Croatian BIBREF20 or answer validation for Russian BIBREF21 .
The idea of DeepER in a nutshell is to improve QA by annotating a text with WordNet synsets using an entity base created by understanding definitions found in encyclopaedia. Parts of this concept have already appeared in the NLP community.
A technique of coordinating synsets assigned to a question and a possible answer emerged in a study by BIBREF45 . While the question analysis there seems very similar to this work, entity library (called proper noun ontology) generation differs a lot. The author analysed 1 GB of newswire text and extracted certain expressions, e.g. "X, such as Y" implies that Y is an instance of X. Although the precision of the resulting base was not very good (47 per cent for non-people proper names), it led to a substantial improvement of QA performance.
The idea of analysing encyclopaedic definitions to obtain this type of information already appeared, but was employed for different applications. For example, BIBREF46 described a method of building a gazetteer by analysing hyperonymy branches of nouns of first sentences in Wikipedia definitions. Unlike in this work, an original synset was replaced by a coarse-grained NER category. Another example of application is a NE recognizer BIBREF47 using words from a definition as additional features for a standard CRF classifier. In their definition analysis only the last word of the first nominal group was used.
Other researchers dealt with a task explicitly defined as classifying Wikipedia entries to NER categories. For example BIBREF48 addressed the problem by combining traditional text classification techniques (bag of words) with contexts of entity mentions. Others BIBREF49 thoroughly examined article categories as a potential source of is-a relations in a taxonomy (99 per cent of entries have at least one category). Inhomogeneity of categories turned out as the main problem, dealt with by a heuristic classifier, assigning is-a and not-is-a labels. Categories were also used as features in a NER task BIBREF50 , but it required a set of manually designed patterns to differentiate between categories of different nature.
Exploring a correspondence between Wikipedia entries and WordNet synsets found an application in automatic enriching ontologies with encyclopaedic descriptions BIBREF51 . However, only NEs already appearing in the WordNet were considered. The task (solved by bag-of-words similarity) is non-trivial only in case of polysemous words, e.g. which of the meanings of Jupiter corresponds to which Wikipedia article? Others BIBREF52 concentrated on the opposite, i.e. extending the WordNet by NEs that are not there yet by adding titles of entries as instances of synsets corresponding to their common category.
Also, some see Wikipedia as an excellent source of high-quality NER training data. Again, it requires projecting entries to NE categories. A thorough study of this problem, presented by BIBREF53 , utilizes features extracted from article content (bag of words), categories, keywords, inter-article and inter-language links. The final annotated corpus turns out to be as good for NER training as a manually annotated gold standard.
Finally, some researchers try to generalise NER to other categories, but keep the same machine-learning-based approach. For example, BIBREF54 developed a tagger, assigning words in a text to one of 41 supersenses. Supersenses include NE categories, but also other labels, such as plant, animal or shape. The authors projected word-sense annotations of publicly available corpora to supersenses and applied perceptron-trained Hidden Markov Model for sequence classification, obtaining precision and recall around 77 per cent.
System Architecture
A general architectural scheme of RAFAEL (figure FIGREF11 ) has been inspired by similar systems developed for English; for examples see works by BIBREF22 and BIBREF23 .
Two of the steps in the diagram concern offline processing of a knowledge base. Firstly, it is indexed by a search engine to ensure efficient searching in further stages (INDEXING). Secondly, it may be annotated using a set of tools (NLP), but this could also happen at an answering stage for selected documents only.
After the system receives a question, it gets analysed (QUESTION ANALYSIS) and transformed into a data structure, called the question model. One of its constituents, a search query, is used to find a set of documents which are probably appropriate for the current problem (SEARCH). For each of the documents, all entity mentions compatible with an obtained question type (e.g. monarchs) are extracted (ENTITY RECOGNITION). For each of them, a context is generated (CONTEXT GENERATION). Finally, a distance between the question content and the entity context is computed to assess its relevance (DISTANCE MEASURE). All the mentions and their distance scores are stored and, after no more documents are left, used to select the best match (BEST ENTITY SELECTION). The system returns the entity, supplied with information about a supporting sentence and a document, as an answer.
Knowledge Base Processing
Knowledge base (KB) processing consists of two elements: indexing and annotating. The objective of the first is to create an index for efficient searching using a search engine. In the system, Lucene 3.6 is used to build two separate full-text indices: regular and stemmed using a built-in stemmer for Polish, Stempel BIBREF24 .
Secondly, texts go through a cascade of annotation tools, enriching it with the following information:
Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,
Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,
Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,
Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .
All the annotations are stored in a variant of TEI P5 standard, designed for the National Corpus of Polish BIBREF31 . As noted previously, the process of annotating is not indispensable at the stage of offline KB processing; it could be as well executed only on documents returned from the search engine (for example see Webclopedia by BIBREF22 or LASSO by BIBREF23 ). However, since during evaluation experiments the same documents undergo the process hundreds of times, it seems reasonable to process the whole KB only once.
Question Analysis
The goal of question analysis is to examine a question and extract all the information that suffices for answer finding. A resulting data structure, called question model, contains the following elements:
Question type – a description of expected answer type, instructing the system, what type of data could be returned as an answer. It has three levels of specificity:
General question type – one of the types of factoid questions, enumerated at the beginning of this chapter,
Named entity type – applicable only in case general type equals named entity. Possible values are the following: place, continent, river, lake, mountain, mountain range, island, archipelago, sea, celestial body, country, state, city, nationality, person, first name, last name, band, dynasty, organisation, company, event, date, century, year, period, number, quantity, vehicle, animal, title.
Focus synset – applicable in case of entity questions; a WordNet synset, to which a question focus belongs; necessary for DeepER.
Search query – used to find possibly relevant documents,
Question content – the words from question which are supposed to appear also in context of an answer.
The task presented above, called question classification, is an example of text classification with very short texts. It could be tackled by a general-purpose classifier; for example, BIBREF11 used SVMs (Support Vector Machines) in a closed-domain Slovene QA system; BIBREF32 employed SNoW (Sparse Network of Winnows) for hierarchical classification of TREC questions. For Polish, the results are not satisfactory BIBREF4 because of data sparsity.
However, sometimes a solution seems quite evident, as some of the question types enforce the question structure. For example, when a question begins with Who or When, it belongs to the person and date question types, respectively. That is why a set of 176 regular expressions (in the case of RAFAEL) suffices to deal with them. They match only a subset of questions (36.15 per cent of the training set), but are highly unambiguous (precision of classification equals 95.37 per cent). Nevertheless, some BIBREF33 use solely such patterns, but need a great number of them (1,273).
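A couple of illustrative patterns (hypothetical examples, not the actual expressions used in RAFAEL) show the idea:

import re

# Hypothetical examples of unambiguous classification patterns; the real
# system uses 176 such expressions for Polish question prefixes.
PATTERNS = [
    (re.compile(r'^Kto\b', re.IGNORECASE), ('NAMED_ENTITY', 'person')),
    (re.compile(r'^Kiedy\b', re.IGNORECASE), ('NAMED_ENTITY', 'date')),
    (re.compile(r'^Gdzie\b', re.IGNORECASE), ('NAMED_ENTITY', 'place')),
]

def classify_by_pattern(question):
    for pattern, question_type in PATTERNS:
        if pattern.match(question):
            return question_type
    return None  # ambiguous questions fall through to focus analysis

print(classify_by_pattern('Kiedy wybuchła II wojna światowa?'))  # ('NAMED_ENTITY', 'date')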
Unfortunately, most of entity questions are ambiguous, i.e. it is not enough to inspect an interrogative pronoun to find an answer type. They may begin with what or which, followed by a question focus. For example, let us consider a question Which russian submarine sank in 2000 with its whole crew?. Its focus (russian submarine) carries information that the question could be answered by a named entity of type vehicle. The whole process of focus analysis is shown in figure FIGREF25 . The first nominal group after a pronoun serves as a possible lexeme name in plWordNet 2.1 BIBREF34 . As long as there are no results, it gets replaced by its semantic head. When a matching lexeme exists in WordNet, a set of all its hypernyms is extracted. If any of the elements in the set correspond to one of the named entity types, this type is recorded in the question model. Otherwise the general question type takes the value unnamed entity. A WordNet-assisted focus analysis was also implemented in one of solutions participating in a TREC competition BIBREF35 .
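A sketch of this loop is given below; wordnet_lexeme, all_hypernyms, semantic_head and the synset-to-NE-type mapping are placeholders for the actual resources, and their names are assumptions of the example.

def analyse_focus(focus_group, wordnet_lexeme, all_hypernyms,
                  semantic_head, ne_type_of_synset):
    group = focus_group
    while group is not None:
        lexeme = wordnet_lexeme(group)          # look the group up in plWordNet
        if lexeme is not None:
            focus_synset = lexeme.synset(0)     # first (most common) word sense
            for synset in [focus_synset] + all_hypernyms(focus_synset):
                ne_type = ne_type_of_synset.get(synset)
                if ne_type is not None:
                    return 'NAMED_ENTITY', ne_type, focus_synset
            return 'UNNAMED_ENTITY', None, focus_synset
        group = semantic_head(group)            # retry with the semantic head
    return 'UNNAMED_ENTITY', None, None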
Search query generation is described in the next chapter. The last element of a question model, called question content, contains segments which are to be compared with texts to find the best answer. It includes all the words of the interrogative sentence except those included in the matched pattern (Which, ?) and the focus (submarine). In our example the following are left: russian, sank, in, 2000, with, its, whole, crew. An entity mention whose context resembles this set will be selected as an answer (see details in section SECREF33 ).
The question analysis stage explained above follows a design presented in previous works BIBREF4 , BIBREF36 , where more details could be found. The major difference lies in result processing – an original synset is not only projected to one of the named entity types, but also recorded as a focus synset in question type, utilised in DeepER to match entity types. In our example, it would only consider submarines as candidate answers.
Document Retrieval
The use of search engines in QA systems is motivated mainly by performance reasons. Theoretically, we could analyse every document in a text base and find the most relevant to our query. However, it would take excessive amount of time to process the documents, majority of which belong to irrelevant domains (839,269 articles in the test set). A search engine is used to speed up the process by selecting a set of documents and limiting any further analysis to them.
As described in section SECREF12 , a knowledge base is indexed by Lucene offline. Given a question, we need to create a search query. The problem is that an answer in the knowledge base is probably expressed differently than the question. Hence, a query created directly from words of the question would not yield results, unless using a highly-redundant KB, such as the WWW (for this type of solution see BIBREF37 ). Therefore, some of the query terms should be dropped – based on their low IDF BIBREF38 or more complex heuristics BIBREF23 . On the other hand, the query may be expanded with synonyms BIBREF22 or derived morphological forms BIBREF38 .
Finally, we need to address term matching issue – how to compare a query keyword and a text word in a morphologically-rich language, such as Polish? Apart from exact match, it also is possible to use a stemmer or fuzzy queries, available in Lucene (accepting a predefined Levenshtein distance between matching strings).
Previous experiments BIBREF36 led to the following query generation procedure (a code sketch follows the list):
Remove all words matched by a regular expression at the classification stage (What, Which, etc.),
Keep the question focus,
Connect all the remaining words with the OR operator,
Use a fuzzy term matching strategy with an absolute distance of 3 characters and a fixed prefix.
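The sketch below illustrates the procedure above; the pronoun pattern is an assumption (the real classification-stage regular expressions are far more numerous), and the fuzzy matching with a fixed prefix is configured in Lucene itself rather than in the produced query string.

    import re

    # Assumed, simplified set of interrogative pronouns removed from the query.
    PRONOUN_PATTERN = re.compile(r"^(który|która|które|kto|co|kiedy|gdzie|ile)$",
                                 re.IGNORECASE)

    def build_query(question_words, focus_words):
        focus = set(focus_words)
        kept = [w for w in question_words
                if not PRONOUN_PATTERN.match(w) and w not in focus and w != "?"]
        # keep the focus and connect all remaining words with the OR operator
        return " OR ".join(focus_words + kept)

    print(build_query(
        ["Który", "rosyjski", "okręt", "podwodny", "zatonął", "w", "2000", "?"],
        ["okręt", "podwodny"]))
    # okręt OR podwodny OR rosyjski OR zatonął OR w OR 2000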
Lucene handles a query and yields a ranked document list, of which the first N are transferred to further analysis. The influence of the value of N on answering performance is evaluated in section SECREF88.
Entity Recognition
Having a set of proposed documents and a question type, the next step is to scan them and find all mentions of entities with appropriate types. RAFAEL includes two approaches to the problem: classical Named Entity Recognition (NER) and novel Deep Entity Recognition.
Three NERs for Polish are employed: NERF, Liner2 and Quant. NERF BIBREF29 is a tool designed within the project of the National Corpus of Polish and is based on linear-chain conditional random fields (CRF). It recognizes 13 types of NEs, possibly nested (e.g. Warsaw in University of Warsaw). Liner2 BIBREF30 also employs CRFs, but differentiates NEs of 56 types (which can be reduced to 5 for higher precision). Annotation using both of the tools happens offline within the KB preprocessing, so at the currently described stage it suffices to browse the annotations and find matching entities. As the above tools lack recognition of quantitative expressions, a new one has been developed especially for RAFAEL and called Quant. It is able to handle both numbers and quantities (using WordNet) in a variety of notations.
Appendix A contains details of the implementation of named entity recognition in RAFAEL, including a description of Quant and a mapping between question types and the named entity types available in NERF and Liner2. The alternative that is the focus of this work, i.e. the DeepER approach, is thoroughly discussed in chapter SECREF3.
RAFAEL may use either of the two approaches to entity recognition: NER (via NERF, Liner2 and Quant) or the novel DeepER; this choice affects its overall performance. Experiments showing the precision and recall of the whole system with respect to the applied entity recognition technique are presented in section SECREF88.
An entity recognition step is performed within the question answering process and aims at selecting all entity mentions in a given annotated document. Before it begins, the entity library is read into a PATRICIA trie, a very efficient prefix tree. In this structure, every entity name becomes a key for storing a corresponding list of entities.
When a document is ready for analysis, it is searched for strings that match any of the keys in the trie. The candidate chunks (sequences of segments) come from three sources:
lemmata of words and syntactic groups,
sequences of words in surface forms (as they appear in text),
sequences of words in base forms (lemmata).
The last two techniques are necessary because nominal group lemmatisation often fails, especially in the case of proper names. Their rich inflection in Polish BIBREF3 means that the nominal suffix of an entity may be hard to predict. Therefore, a chunk is considered to match an entity name if (a code sketch follows the list):
they share a common prefix,
neither of the unmatched suffixes is longer than 3 characters,
the common prefix is longer than the unmatched chunk suffix.
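A self-contained sketch of this approximate matching rule is given below; in RAFAEL the candidate chunks are first looked up in the PATRICIA trie of entity names, which is omitted here.

    import os

    def matches_entity_name(chunk, name, max_suffix=3):
        prefix_len = len(os.path.commonprefix([chunk, name]))
        chunk_suffix = len(chunk) - prefix_len
        name_suffix = len(name) - prefix_len
        return (prefix_len > 0
                and chunk_suffix <= max_suffix
                and name_suffix <= max_suffix
                and prefix_len > chunk_suffix)

    # an inflected mention still matches the base form of the name:
    print(matches_entity_name("Warszawie", "Warszawa"))   # True
    print(matches_entity_name("Warta", "Warszawa"))       # False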
Given a list of entity mentions, RAFAEL checks their compatibility with the question model. Two of its constituents are taken into account: the general question type and the synset. An entity mention agrees with the NAMED_ENTITY type if its first segment starts with a capital letter, and it always agrees with UNNAMED_ENTITY. To pass the semantic agreement test, the synset of the question model needs to be a (direct or indirect) hypernym of one of the synsets assigned to the entity. For example, the list of synsets assigned to the entity Jan III Sobieski contains <król.1> (king), so it matches a question focus <władca.1, panujący.1, hierarcha.2, pan.1> (ruler) through the hypernymy path <władca.1, panujący.1, hierarcha.2, pan.1> → <monarcha.1, koronowana głowa.1> (monarch) → <król.1>. All the mentions of entities satisfying these conditions are returned for further processing.
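A minimal sketch of this compatibility test follows; the tiny hypernymy relation below is a toy stand-in for the plWordNet hypernymy graph, and the synset identifiers are abbreviated.

    HYPERNYM_OF = {"król.1": "monarcha.1", "monarcha.1": "władca.1"}  # child -> parent

    def has_hypernym(synset, ancestor):
        while synset is not None:
            if synset == ancestor:
                return True
            synset = HYPERNYM_OF.get(synset)
        return False

    def agrees_with_question(mention_text, entity_synsets, general_type, focus_synset):
        if general_type == "NAMED_ENTITY" and not mention_text[:1].isupper():
            return False                     # named entities must be capitalised
        if focus_synset is None:
            return True
        # the question focus must be a (direct or indirect) hypernym of one of
        # the synsets assigned to the entity
        return any(has_hypernym(s, focus_synset) for s in entity_synsets)

    print(agrees_with_question("Jan III Sobieski", ["król.1"],
                               "NAMED_ENTITY", "władca.1"))   # True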
Mention selection
When a list of entity mentions in a given document is available, we need to decide which of them most likely answers the question. The obvious way to do that is to compare the surroundings of every mention with the content of the question. The procedure consists of two steps: context generation and similarity measurement.
The aim of the context generation step is to create a set of segments surrounding the entity to which they are assigned. Without full text understanding capabilities, two approximate approaches seem legitimate:
Sentence-based – for a given entity mention, a sentence in which it appears, serves as a context,
Segment-based – for a given entity mention, every segment sequence of length M, containing the entity, is a context.
Both of them have advantages: relying on a single sentence ensures a relation between an entity and its context, whereas the latter makes it possible to adjust the context length. Obviously, the value of M should be proportional to the length of the question (more precisely, of its content).
The method of treating sentences as contexts has gained the most popularity (see the work of BIBREF39), but a window of fixed size also appears in the literature; for example, BIBREF38 used one with M=140 bytes.
Context generation is also related to another issue, i.e. anaphoric expressions. Some segments (e.g. this, him, they) may refer to entities that occurred earlier in a text and therefore harm the similarity estimation. This could be tackled by applying anaphora resolution, but the available solution for Polish BIBREF40 remains at an early stage. Observations show that the majority of anaphora refer to an entity in the document title, so the problem is partially bypassed by adding the title to the context.
The influence of the context generation techniques on the final results is shown in section SECREF88.
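The two strategies can be sketched as follows; sentence segmentation, tokenisation and mention offsets are assumed to be available from the document annotation.

    def sentence_context(sentence_segments, title=None):
        # the sentence containing the mention, optionally prefixed with the title
        return ([title] if title else []) + list(sentence_segments)

    def window_contexts(segments, mention_start, mention_end, m, title=None):
        # every window of length m that contains the whole mention
        prefix = [title] if title else []
        first = max(0, mention_end - m)
        last = min(mention_start, len(segments) - m)
        return [prefix + segments[i:i + m] for i in range(first, last + 1)]

    sentence = ["Kursk", "zatonął", "w", "2000", "roku", "na", "Morzu", "Barentsa"]
    print(window_contexts(sentence, 0, 1, 4, title="Kursk (okręt podwodny)"))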
To measure the similarity between a question content (explained in section SECREF18) and an entity context (generated by the procedures in the previous section), a Jaccard similarity index BIBREF41 is computed. However, not all word co-occurrences matter equally (e.g. compare this and Honolulu), so word weights are used: $J(Q, C) = \frac{\sum_{b \in Q \cap C} w_b}{\sum_{b \in Q \cup C} w_b}$
The sets $Q$ and $C$ contain, respectively, the question content and the entity context segments in base forms, whereas $w_b$ denotes the weight of a base form $b$, equal to its scaled IDF computed on a document set $D$: $w_b \propto \log \frac{|D|}{|\{d \in D : b \in d\}|}$
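A runnable sketch of the IDF-weighted Jaccard similarity defined above is shown below; the toy document collection and the absence of any additional scaling of the IDF values are simplifications.

    import math

    def idf_weights(documents):
        n = len(documents)
        vocabulary = set().union(*documents)
        return {b: math.log(n / sum(b in d for d in documents)) for b in vocabulary}

    def weighted_jaccard(question_content, entity_context, w):
        q, c = set(question_content), set(entity_context)
        union = sum(w.get(b, 0.0) for b in q | c)
        return sum(w.get(b, 0.0) for b in q & c) / union if union else 0.0

    docs = [{"rosyjski", "okręt", "zatonąć"}, {"kot", "pies"}, {"okręt", "port"}]
    w = idf_weights(docs)
    print(weighted_jaccard({"rosyjski", "okręt", "zatonąć"},
                           {"okręt", "zatonąć", "Kursk"}, w))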
The Jaccard index is a popular solution for sentence similarity measurement in QA (for example, see the system by BIBREF42). In the case of selecting relevant documents, the cosine measure is also applied. BIBREF18 compared it to Minimal Span Weighting (MSW) and observed that the latter performs better, as it takes into account the distance between matched words. A study of different techniques for sentence similarity assessment can be found in BIBREF39.
At this stage, a large set of entity mentions, paired with their contexts and assigned scores, is available. Which of them answers the question? Choosing the one with the highest score seems an obvious solution, but we could also aggregate the scores of different mentions corresponding to the same answer (entity), e.g. compute their sum or mean. However, such experiments did not yield an improvement, so RAFAEL returns only the single answer with the highest score.
An answer consists of the following elements: an answer string, a supporting sentence, a supporting document and a confidence value (the score). The sentence and the document in which the best mention appeared are assumed to support the answer. Thanks to the properties of Jaccard similarity, the mention score ranges from 0 for completely unrelated sentences to 1 for practically (ignoring inflection and word order) identical ones. Therefore, it may serve as an answer confidence.
When no entity mentions satisfying the constraints of a question are found, no answer is returned. This type of result could also be produced when the best confidence score is below a predefined value; the performance of such a technique is shown in section SECREF88. The refusal to answer in case of insufficient confidence plays an important role in Jeopardy!, hence also in IBM Watson BIBREF2, but it was also used to improve precision in other QA systems BIBREF43.
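A minimal sketch of this final selection step follows; the tuple layout of a scored mention is an assumption made for illustration.

    def select_answer(scored_mentions, min_confidence=0.0):
        """scored_mentions: list of (answer_string, sentence, document, score) tuples."""
        best = max(scored_mentions, key=lambda m: m[3], default=None)
        if best is None or best[3] < min_confidence:
            return None          # refuse to answer when confidence is insufficient
        return best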
Deep Entity Recognition
The Deep Entity Recognition procedure is an alternative to applying Named Entity Recognition in QA to find entities matching question constraints. It scans a text and finds words and multi-word expressions corresponding to entities. However, it does not assign them to one of several NE categories; instead, WordNet synsets are used. Therefore, named entities are differentiated more precisely (e.g. monarchs and athletes) and entities beyond the classical NE categories (e.g. species, events, devices) can also be recognised in a text.
It does not seem possible to perform this task relying solely on features extracted from words and surrounding text (as in NER), so it is essential to build an entity library. Such libraries already exist (Freebase, BabelNet, DBpedia or YAGO) and could provide an alternative to DeepER, but they concentrate on English. The task of adapting such a base to another language is far from trivial, especially for Slavonic languages with complex NE inflection BIBREF3. An ontology taking into account Polish inflection (Prolexbase) has been created by BIBREF44, but it contains only 40,000 names, grouped into 34 types.
Entity Library
An entity library for DeepER contains the knowledge about entities that is necessary for deep entity recognition. Each entity consists of the following elements (exemplified by entity #9751, describing the Polish president Bronisław Komorowski; a possible code representation is sketched after the list):
Main name: Bronisław Komorowski,
Other names (aliases): Bronisław Maria Komorowski, Komorowski,
Description URL: http://pl.wikipedia.org/wiki/?curid=121267,
plWordNet synsets:
<podsekretarz1, podsekretarz stanu1, wiceminister1> (vice-minister, undersecretary),
<wicemarszałek1> (vice-speaker of the Sejm, the Polish parliament),
<polityk1> (politician),
<wysłannik1, poseł1, posłaniec2, wysłaniec1, posłannik1> (member of a parliament),
<marszałek1> (speaker of the Sejm),
<historyk1> (historian),
<minister1> (minister),
<prezydent1, prezydent miasta1> (president of a city, mayor).
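One possible in-memory representation of such a library entry is sketched below; the actual storage format used by RAFAEL is not specified here, and the synset identifiers are abbreviated.

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        main_name: str
        aliases: list = field(default_factory=list)
        description_url: str = ""
        synsets: list = field(default_factory=list)   # plWordNet synset identifiers

    komorowski = Entity(
        main_name="Bronisław Komorowski",
        aliases=["Bronisław Maria Komorowski", "Komorowski"],
        description_url="http://pl.wikipedia.org/wiki/?curid=121267",
        synsets=["polityk.1", "historyk.1", "minister.1", "marszałek.1"],
    )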
The process of entity library extraction is performed offline, before question answering. The library built for deep entity recognition in RAFAEL, based on the Polish Wikipedia (857,952 articles, 51,866 disambiguation pages and 304,823 redirections), contains 809,786 entities with 1,169,452 names (972,592 unique). The algorithm does not depend on any particular feature of Wikipedia, so any corpus containing entity definitions could be used.
Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing the former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, the entity name is detached from the text by matching one of the definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors are excluded from further analysis (4.1). Finally, we split the coordination groups and check whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common one) is taken into account.
The whole process is more complicated than the simple example shows. Generally, it consists of the following steps:
Prepare a corpus – the data format and annotation process are the same as for the knowledge base used in question answering (see section SECREF12). It differs in the scope of page categories, including not only articles, but also disambiguation and redirection pages.
For each article page, extract the first paragraph and apply the readDefinition function. If the resulting entity has a non-empty synset list, add it to the library. If some of the redirection pages point to the entity name, add their names as entity aliases.
For each disambiguation page, extract all items and apply the readDefinition function. If an item refers to an existing entity, extend it with the extracted synsets and the disambiguation page name; otherwise create a new entity. Add redirection names as previously.
Save the obtained base for future use.
Function readDefinition(paragraph) – interprets a definition to assign synsets to an entity. Input: paragraph – the annotated first paragraph of an encyclopaedic entry. Output: synsets – synsets describing the entity.
    synsets := {}
    paragraph := removeInBrackets(paragraph)
    paragraph := removeInQuotes(paragraph)
    for each pattern in definitionPatterns:
        if paragraph matches pattern:
            paragraph := match(pattern, paragraph).group(2)
            break
    paragraph := removeDefinitionPrefixes(paragraph)
    chunks := split(paragraph, separators)
    for each chunk in chunks:
        group := firstGroupOrWord(chunk)
        if isNominal(group):
            synsets := synsets ∪ extractSynsets(group)
        else:
            break
    return synsets
The readDefinition function (shown as algorithm SECREF40) analyses a given paragraph of text and extracts a set of synsets describing the entity to which it corresponds, as exemplified by figure FIGREF54. Simplifying, this is done by removing all unnecessary text (in brackets or quotes), splitting it on predefined separators (commas, full stops, semicolons) and applying the extractSynsets function with an appropriate stop criterion. readDefinition makes use of the following elements:
removeInBrackets – removes everything that is between brackets ([], () or {}) from the text (step (1) in figure FIGREF54).
removeInQuotes – removes everything between single or double quotes from the text (step (1) in the example).
definitionPatterns – contains patterns of strings separating a defined concept from a definition, e.g. hyphens or dashes (used in step (2) of the example) or jest to (is a).
removeDefinitionPrefixes – removes expressions commonly prefixing a nominal group, such as jeden z (one of), typ (a type of) or klasa (a class of), not present in the example.
separators – a set of three characters that separate parts of a definition: ".", "," and ";".
firstGroupOrWord – returns the longest syntactic element (syntactic group or word) starting at the beginning of a chunk (step (4) in the example).
isNominal – decides whether a chunk is a noun in nominative, a nominal group or a coordination of nominal groups.
Function extractSynsets(chunk) – recursively extracts synsets from a nominal chunk. Input: chunk – a nominal chunk (a syntactic group or a single noun). Output: WordNet synsets corresponding to chunk.
    lemma := lemmatise(chunk)
    if inWordNet(lemma):
        return {getLexemes(lemma).synset(0)}
    if isCoordination(chunk):
        synsets := {}
        for each element in chunk:
            synsets := synsets ∪ extractSynsets(element)
        return synsets
    if isGroup(chunk):
        return extractSynsets(chunk.semanticHead)
    return {}
The extractSynsets function (shown as algorithm SECREF40) accepts a nominal chunk and extracts the WordNet synsets corresponding to it. It operates recursively to discard any unnecessary chunk elements and find the longest subgroup having a counterpart in WordNet. It corresponds to step (5) in figure FIGREF54 and uses the following elements:
lemmatise – returns the lemma of a nominal group.
inWordNet – checks whether a given text corresponds to a lexeme in WordNet.
getLexemes – returns a list of WordNet lexemes corresponding to a given text.
synset(n) – returns the synset including a lexeme in the given word sense number n.
isCoordination – returns TRUE iff a given chunk is a coordination group.
isGroup – returns TRUE iff a given chunk is a syntactic group.
semanticHead – the element of a syntactic group denoted as its semantic head.
A few of the design decisions reflected in these procedures require further comment. First of all, they differ a lot from the studies that represent a definition with a bag of words BIBREF48, BIBREF51, BIBREF53. Here, a certain definition structure is assumed, i.e. a series of nominal groups divided by separators. What is more, as the full stop belongs to the separators, the series may continue beyond a single sentence, which improved recall in preliminary experiments. The availability of a shallow parsing layer and group lemmatisation makes it possible to query WordNet with syntactic groups instead of single nouns, as in the work of BIBREF46. As word order is relatively free in Polish, a nominal group cannot be assumed to end with a noun, as BIBREF47 assumed. Instead, the semantic head of a group is used.
Finally, the problem of the lack of word sense disambiguation remains – the line getLexemes(lemma).synset(0) means that the synset connected to the first meaning of a lexeme is always selected. We assume that it corresponds to the most common meaning, but that is not always the case – in our example in figure FIGREF54 <prezydent.1, prezydent miasta.1> (president of a city, i.e. mayor) precedes <prezydent.2> (president of a country, the obvious meaning). However, it does not have to harm QA performance as long as the question analysis module (section SECREF18) functions analogously, e.g. in the case of a question beginning with który prezydent... (which president...). The decision has also been motivated by the relatively good performance of this solution in previous experiments on question analysis BIBREF36. It also works in other applications, e.g. gazetteer generation BIBREF46.
To assess the quality of the entity library, its content has been compared with synsets manually extracted from 100 randomly selected Wikipedia articles. 95 of them contain a description of an entity in the first paragraph. Among those, the DeepER entity library includes 88 (per-entity recall 92.63 per cent). 135 synsets have been manually assigned to those entities, while the corresponding set in the library contains 133 items. 106 of them are equal (per-synset precision 79.70 per cent), while 13 differ only by word sense. 16 of the manually extracted synsets have no counterpart in the entity library (per-synset recall 88.15 per cent), which instead includes 14 false synsets.
Evaluation
The evaluation of RAFAEL is typical for factoid QA systems: given a knowledge base and a set of questions, its responses are compared to the expected ones, prepared in advance. Section SECREF80 describes the data used in this procedure, whereas section SECREF87 explains how an automatic evaluation is possible without human labour.
Data
The Polish Wikipedia serves as the knowledge base. It has been downloaded from the project site as a single database dump of 03.03.2013, from which plain text files have been extracted using the Wikipedia Extractor 2.2 script. This means that only plain text is taken into account – without lists, infoboxes, tables, etc. The procedure leads to a corpus with 895,486 documents, containing 168,982,550 segments, which undergo the annotation process described in section SECREF12.
The questions that are to be answered with the knowledge base come from two separate sets:
The development set is based on 1500 (1130 after filtering) questions from a Polish quiz TV show called Jeden z dziesięciu BIBREF55. It was used in previous experiments BIBREF4, BIBREF36.
The evaluation set is based on an open dataset for Polish QA systems, published by BIBREF56. It has been gathered from the Did you know... column, appearing on the main page of the Polish Wikipedia. It contains 4721 questions, of which 1000 have been analysed, resulting in 576 that satisfy the task constraints given in chapter SECREF2.
Table TABREF85 shows a distribution of different question types and named entity types in the sets.
To each of the questions from both sets some information has been assigned manually. It includes an identification number, an expected answer string, a general question type, a named entity type (if applicable) and an expected source document. Table TABREF86 contains several exemplary questions from the development set.
The additional information (question types and expected documents) makes it possible to evaluate only selected modules of the whole QA system. For example, we could test question classification by comparing results against given question types or entity selection by analysing only the relevant document.
Automatic Evaluation
Thanks to the availability of the DeepER entity library, it is possible to automatically perform answer evaluation for all the question types that are recognised by this technique (UNNAMED_ENTITY and NAMED_ENTITY, excluding dates, numbers and quantities).
Both the expected and the obtained answer are represented as short strings, e.g. Bronisław Komorowski. However, it does not suffice to check their exact equality. That is caused by the existence of different names for one entity (Bronisław Maria Komorowski or Komorowski), but also by rich nominal inflection (Komorowskiego, Komorowskiemu, ...).
In fact, we want to compare entities, not names. Hence, deep entity recognition is a natural solution here. To check the correctness of an answer, we use it as an input for the recognition process, described in section SECREF73. Then, it is enough to check whether the expected answer appears in any of the lists of names assigned to the recognised entities. For example, let us consider the question: Kto jest obecnie prezydentem Polski? (Who is the current president of Poland?) with the expected answer Bronisław Komorowski and a system answer Komorowski. The DeepER process finds many entities in the string (all the persons bearing this popular surname). One of them is the question goal and hence has Bronisław Komorowski in its list of names.
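Reusing the Entity representation sketched earlier, the automatic evaluation rule could look as follows; recognize_entities stands for the DeepER recognition step and is an assumption of this illustration.

    def answer_is_correct(system_answer, expected_answer, recognize_entities):
        # the system answer is run through entity recognition and the expected
        # answer must appear among the names of one of the recognised entities
        for entity in recognize_entities(system_answer):
            if expected_answer in [entity.main_name] + list(entity.aliases):
                return True
        return False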
As the process of entity recognition is imperfect, so is the automatic evaluation. However, it still lets us notice general trends in answering performance with respect to several factors. Of course, the final evaluation needs to be checked manually.
Results
As mentioned in the previous section, the results consist of two groups: experiments, showing the influence of some aspects of the algorithm on performance, and a final assessment. Both use the Polish Wikipedia as a knowledge base, whereas the questions asked belong to the development and evaluation sets, respectively. In this section, recall measures the percentage of questions to which RAFAEL gave any answer, whereas precision denotes the percentage of questions answered correctly.
When analysing the results of different entity recognition techniques, we need to remember that they strongly rely on the output of the question analysis, which is not perfect. In particular, tests show that 15.65 per cent of questions are assigned to a wrong type and 17.81 per cent of search results do not include the expected document BIBREF36. The entity recognition (ER) stage, the focus of this work, is very unlikely to deliver valid answers in these cases. However, as the expected question type and source document are available in the question metadata, it is possible to correct the results of question analysis by artificially replacing a wrong type and/or adding the expected document to the retrieved set. In that way the ER modules can be evaluated as if question analysis worked perfectly. Note that this approach slightly favours NER-based solutions, as the question metadata contains general types and named entity types but lacks the focus synsets used by DeepER.
Experiments
The goal of the first experiment is to test how the number of documents retrieved from the search engine and analysed by the entity recognition techniques influences the performance. Question classification errors have been bypassed as described in the previous paragraph. Additionally, two versions have been evaluated: with and without corrections of the retrieved set of documents. Figure FIGREF89 shows the results for different entity recognition techniques.
As we can see, if the retrieved set contains the desired article, adding new documents slightly increases recall, while precision drops observably. That is because additional irrelevant documents usually introduce noise. However, in some cases they are useful, as the increasing recall indicates. On the other hand, if we have no guarantee of the presence of the expected document in the list, it seems more desirable to extend it, especially for small sizes. For sets bigger than 50 elements, the noise factor again dominates the results. Judging by the F1 measure, the optimal value is 20 documents.
When it comes to the comparison, it should be noted that DeepER performs noticeably better than traditional NER. The gain in precision is small, but recall is almost twice as high. This can be easily explained by the fact that the NER solutions are unable to handle the UNNAMED_ENTITY type, which accounts for 36 per cent of the entity questions.
It is also worthwhile to check how the system performs with different values of the minimal confidence rate (Jaccard similarity), described in section UID38. This could become useful when we demand higher precision and accept a lower recall ratio. The plot in figure FIGREF90 shows answering performance using DeepER with corrected question analysis with respect to the minimal confidence rate. Generally, the system behaves as expected, but the exact values are disappointing. The precision remains at a level of 25-40 per cent up to confidence 0.75, at which point recall drops to only 0.35 per cent. Values of the F1 measure suggest that 0.2 is the highest sensible confidence rate.
One more parameter worth testing, explained in section UID34, is the context generation strategy. To find the entity with a context most similar to the question content, we could analyse the single sentence in which it appears, or a sequence of words of a predefined length. For both of these solutions, we could also add the document title, as it is likely to be referred to by anaphoric expressions. Figure FIGREF91 shows the precision (recall does not depend on context) for these four solutions.
We can see that the inclusion of the title in the context helps to achieve better precision. The impact of anaphoric references to the title emerges clearly in the case of the flexible context – the difference grows with context size. Quite surprisingly, for the optimal context length (1.5 * question size) the opposite holds. However, because of the small difference between the techniques including the title, for the sake of simplicity, the single sentence is used in the final evaluation.
Final System Evaluation
To pose a realistic challenge to the system, the evaluation set used at this stage substantially differs from the one used during development (see section SECREF80). The configuration for the final evaluation has been prepared based on the results of the experiments. All of the tested versions share the following features:
no question analysis corrections,
question classification and query generation solutions which proved best in the previous experiments (see section SECREF18 ),
a retrieved set of documents including 20 articles,
no minimal confidence,
single-sentence context with title.
The tested solutions differ with respect to entity recognition only; RAFAEL variants based on the following options are considered:
quantities recognizer (Quant),
traditional NER solutions: Nerf and Liner2,
deep entity recognition (DeepER),
hybrid approach, where entity mentions were gathered from all the above sources.
Table TABREF103 shows the results of the final evaluation, expressed by recall, precision, F1 measure and Mean Reciprocal Rank (MRR). Standard deviations of these values have been obtained by bootstrap resampling of the test set. Additionally, the precision obtained by automatic evaluation has been added, where applicable. As we can see, only a small percentage of questions is handled by the quantitative entity recognition. NER-based solutions deal with slightly more (Nerf) or less (Liner2) than half of the questions. When using DeepER, the recall ratio rises to 73 per cent while the precision does not differ significantly. That is because UNNAMED_ENTITY questions (unreachable for traditional NER) account for a substantial part of the test set. The maximum recall is obtained by the hybrid solution (90 per cent), but it comes at the cost of lower precision (33 per cent). On the other hand, when we take the whole ranking lists into account, the traditional NERs seem to perform better (in terms of MRR).
As expected, the automatic evaluation underestimates precision, but the difference remains below 5 per cent. Judging by the F1 measure, the hybrid solution seems to beat the others.
Discussion
The main strength of DeepER compared to NER, according to the results shown in table TABREF103, is its much higher recall. Table TABREF106 shows examples of questions to which only DeepER provides a correct answer. As we can see (notice the question foci in the table), they could not be assigned to any of the traditional NE categories.
The other striking fact in the results is low precision. A part of the wrong answers was inspected and most of the errors seem to result from the following phenomena:
The entity recognizers also introduce errors typical for them:
The last remark also applies to other techniques. For example, consider the word kot, which means a cat. However, it is also the name of a journal, a lake, a village, a badge (KOT), the surname of 10 persons in the Polish Wikipedia and much more. A human would usually assume the most common meaning (a cat), but the system treats them all as equally probable. This introduces noise into the process, as such an entity matches many types of questions.
Another thing that demands explanation is the difference in precision of answers found using Liner2 and DeepER: in the evaluation set the latter does not maintain its advantage from the development set. This could be explained by the different compositions of the question sets (table TABREF85) – the development set contains many more questions beginning with ambiguous pronouns followed by a question focus, e.g. Który poeta... (which poet), thus providing a precise synset (a poet) for deep entity recognition. Members of the evaluation set much more frequently begin with pronouns like Kto... (who), where the synset corresponds to a general NE type (a person).
As RAFAEL is the first Polish QA system able to answer with entities instead of documents, we cannot compare it directly to any other solution. However, the evaluation set has been created based on questions published by BIBREF56 and used for the evaluation of a document retrieval system BIBREF18. Their baseline configuration achieved a@1 (the percentage of questions answered by the first document, corresponding to precision in table TABREF103) equal to 26.09 per cent. By taking into account the proximity of keyword matches (the MCSW method), they improved the result to 38.63 per cent. We can see that RAFAEL, despite solving a much more challenging problem, obtains better precision than the baseline in all configurations; using Liner2 it beats even the best method tested on this set (MCSW).
The results suggest two possible directions of future work to improve the performance of RAFAEL. Firstly, involving semantics in sentence matching could solve some of the problems mentioned above. There are a lot of techniques in that area, also in QA systems (see the variety of them used by BIBREF39), but their implementation for a morphologically rich language would require a thorough study. For example, there exist techniques computing semantic similarity based on a WordNet graph BIBREF57, which is available for Polish and proved very useful in this study. Secondly, the relatively good performance of the hybrid ER indicates that it may be beneficial to apply different entity recognizers to different questions. For example, we could evaluate them for each question type separately and select the one that performs best for a given type. However, this would require much more training data to have a substantial number of questions of each type, including the scarce ones (observe the sparsity of table TABREF85).
When it comes to DeepER, word ambiguity seems to be the main issue for future efforts. Of course, a full-lexicon precise word-sense disambiguation tool would solve the problem, but we cannot expect one in the near future. Instead, we could select a synset somewhere on the path between a focus synset and a named entity type. In the example from figure FIGREF54, rather than choosing between <prezydent.1, prezydent miasta.1> (president of a city) and <prezydent.2> (president of a country), we could use <urzędnik.1, biuralista.1> (official), which covers both meanings.
Conclusions
This paper introduces RAFAEL, a complete open-domain question answering system for Polish. It is capable of analysing a given question, scanning a large corpus and extracting an answer, represented as a short string of text.
In its design, the focus has been on entity recognition techniques, used to extract all the entities compatible with a question from a given text. Apart from traditional named entity recognition, differentiating between several broad categories of NEs, a novel technique, called Deep Entity Recognition (DeepER), has been proposed and implemented. It is able to find entities belonging to a given WordNet synset, using an entity library gathered by interpreting definitions from an encyclopaedia.
Automatic evaluation, provided by the DeepER approach, has made it possible to perform several experiments, showing answering accuracy with respect to different parameters. Their conclusions have been used to prepare the final evaluation, whose results have been checked manually. They suggest that the DeepER-based solution yields precision similar to NER, but is able to answer many more questions, including those beyond the traditional categories of named entities.
Appendix A: Named Entity Recognition in RAFAEL
As mentioned in section SECREF32, apart from DeepER, RAFAEL also employs traditional NER-based solutions for entity recognition: NERF, Liner2 and Quant. Each of them uses its own typology of named entities, which covers only a part of the types enumerated in section SECREF18. Table TABREF118 shows the correspondence between these types. As we can see, there are a few problems:
Problems 3 and 4 are solved by additional postprocessing code, extracting CENTURY from date entities and NAME and SURNAME from person_nam entities. In the case of multi-segment person entities it assumes that the first and last word correspond to the first and last name, respectively.
While NERF and Liner2 are standalone NER tools whose design details are available in the previously mentioned publications, Quant has been created specifically for RAFAEL. To find numbers, it annotates all chains of segments according to a predefined pattern, which accepts the following types of segments:
The pattern is matched in greedy mode, i.e. it adds as many new segments as possible. It can recognise expressions like 10 tysięcy (10 thousand), kilka milionów (several million), 10 000 or 1.698,88 (1,698.88).
A quantity is a sequence of segments recognised as a number, followed by a unit of measurement. To check whether a word denotes a unit of measurement, plWordNet is searched for lexemes equal to its base form. Then it suffices to check whether it belongs to a synset having <jednostka miary 1> (unit of measurement) as one of its (direct or indirect) hypernyms, e.g. piętnaście kilogramów (fifteen kilograms) or 5 000 watów (5 000 watts).
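An illustrative sketch of this part of Quant follows; the number pattern covers only digit-based notations (word numerals such as kilka milionów are omitted), and the unit test is passed in as a callable standing for the plWordNet hypernymy check described above.

    import re

    NUMBER = re.compile(r"\d{1,3}(?:[ .]\d{3})*(?:,\d+)?|\d+")

    def find_quantities(segments, is_unit_of_measurement):
        quantities = []
        i = 0
        while i < len(segments):
            if NUMBER.fullmatch(segments[i]):
                j = i + 1                                  # extend greedily
                while j < len(segments) and NUMBER.fullmatch(segments[j]):
                    j += 1
                if j < len(segments) and is_unit_of_measurement(segments[j]):
                    quantities.append(segments[i:j + 1])   # number + unit
                i = j
            else:
                i += 1
        return quantities

    print(find_quantities(["5", "000", "watów", "w", "2000"],
                          lambda w: w in {"watów", "kilogramów"}))
    # [['5', '000', 'watów']]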
Acknowledgments
Study was supported by research fellowship within "Information technologies: research and their interdisciplinary applications" agreement number POKL.04.01.01-00-051/10-00. Critical reading of the manuscript by Agnieszka Mykowiecka and Aleksandra Brzezińska is gratefully acknowledged.
Unfortunately, most of entity questions are ambiguous, i.e. it is not enough to inspect an interrogative pronoun to find an answer type. They may begin with what or which, followed by a question focus. For example, let us consider a question Which russian submarine sank in 2000 with its whole crew?. Its focus (russian submarine) carries information that the question could be answered by a named entity of type vehicle. The whole process of focus analysis is shown in figure FIGREF25 . The first nominal group after a pronoun serves as a possible lexeme name in plWordNet 2.1 BIBREF34 . As long as there are no results, it gets replaced by its semantic head. When a matching lexeme exists in WordNet, a set of all its hypernyms is extracted. If any of the elements in the set correspond to one of the named entity types, this type is recorded in the question model. Otherwise the general question type takes the value unnamed entity. A WordNet-assisted focus analysis was also implemented in one of solutions participating in a TREC competition BIBREF35 .
Search query generation is described in the next chapter. The last element of a question model, called question content, contains segments which are to be compared with texts to find the best answer. It includes all the words of the interrogative sentence except those included in the matched pattern (Which, ?) and the focus (submarine). In our example the following are left: russian, sank, in, 2000, with, its, whole, crew. An entity mention whose context resembles this set will be selected as an answer (see details in section SECREF33 ).
The question analysis stage explained above follows a design presented in previous works BIBREF4 , BIBREF36 , where more details could be found. The major difference lies in result processing – an original synset is not only projected to one of the named entity types, but also recorded as a focus synset in question type, utilised in DeepER to match entity types. In our example, it would only consider submarines as candidate answers.
Document Retrieval
The use of search engines in QA systems is motivated mainly by performance reasons. Theoretically, we could analyse every document in a text base and find the most relevant to our query. However, it would take an excessive amount of time to process the documents, the majority of which belong to irrelevant domains (839,269 articles in the test set). A search engine is used to speed up the process by selecting a set of documents and limiting any further analysis to them.
As described in section SECREF12 , a knowledge base is indexed by Lucene offline. Given a question, we need to create a search query. The problem is that an answer in the knowledge base is probably expressed differently than the question. Hence, a query created directly from words of the question would not yield results, unless using a highly-redundant KB, such as the WWW (for this type of solution see BIBREF37 ). Therefore, some of the query terms should be dropped – based on their low IDF BIBREF38 or more complex heuristics BIBREF23 . On the other hand, the query may be expanded with synonyms BIBREF22 or derived morphological forms BIBREF38 .
Finally, we need to address the term matching issue – how to compare a query keyword and a text word in a morphologically rich language, such as Polish? Apart from exact match, it is also possible to use a stemmer or fuzzy queries, available in Lucene (accepting a predefined Levenshtein distance between matching strings).
Previous experiments BIBREF36 led to the following query generation procedure:
Remove all words matched by a regular expression at the classification stage (What, Which, etc.),
Keep a question focus,
Connect all the remaining words by OR operator,
Use a fuzzy term matching strategy with an absolute distance equal to 3 characters and a fixed prefix.
Lucene handles a query and yields a ranked document list, of which the first N are passed on to further analysis. The influence of the value of N on answering performance is evaluated in section SECREF88 .
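The procedure can be sketched as follows in Python; the data structures are illustrative and the actual Lucene-specific calls (query parsing, fuzzy query construction) are omitted, so this is not RAFAEL's real code:
def generate_query_terms(question_words, pattern_words, focus_words):
    """Drop words consumed by the classification pattern, keep the focus and the
    remaining words; every term is later OR-ed and matched fuzzily by Lucene
    (edit distance up to 3 with a fixed prefix)."""
    terms = []
    for w in question_words:
        if w in pattern_words and w not in focus_words:
            continue                                   # e.g. "Which" and "?" are dropped
        terms.append({"term": w, "fuzzy_distance": 3, "fixed_prefix": True})
    return terms

terms = generate_query_terms(
    question_words=["which", "russian", "submarine", "sank", "in", "2000"],
    pattern_words={"which"},
    focus_words={"russian", "submarine"},
)
print([t["term"] for t in terms])   # ['russian', 'submarine', 'sank', 'in', '2000']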
Entity Recognition
Having a set of proposed documents and a question type, the next step is to scan them and find all mentions of entities with appropriate types. RAFAEL includes two approaches to the problem: classical Named Entity Recognition (NER) and novel Deep Entity Recognition.
Three NERs for Polish are employed: NERF, Liner2 and Quant. NERF BIBREF29 is a tool designed within the project of the National Corpus of Polish and is based on linear-chain conditional random fields (CRF). It recognizes 13 types of NEs, possibly nested (e.g. Warsaw in University of Warsaw). Liner2 BIBREF30 also employs CRFs, but differentiates NEs of 56 types (which could be reduced to 5 for higher precision). Annotation using both of the tools happens offline within the KB preprocessing, so at the currently described stage it suffices to browse the annotations and find matching entities. As the above tools lack recognition of quantitative expressions, a new one, called Quant, has been developed especially for RAFAEL. It is able to handle both numbers and quantities (using WordNet) in a variety of notations.
Appendix A contains details of the implementation of named entity recognition in RAFAEL, including a description of Quant and a mapping between question types and named entity types available in NERF and Liner2. An alternative in the focus of this work, i.e. the DeepER approach, is thoroughly discussed in chapter SECREF3 .
RAFAEL may use any of the two approaches to entity recognition: NER (via NERF, Liner2 and Quant) or novel DeepER; this choice affects its overall performance. Experiments showing precision and recall of the whole system with respect to applied entity recognition technique are demonstrated in section SECREF88 .
An entity recognition step is performed within the question answering process and aims at selecting all entity mentions in a given annotated document. Before it begins, the entity library is read into a PATRICIA trie, a very efficient prefix tree. In this structure, every entity name becomes a key for storing a corresponding list of entities.
When a document is ready for analysis, it is searched for strings that match any of the keys in the trie. The candidate chunks (sequences of segments) come from three sources:
lemmata of words and syntactic groups,
sequences of words in surface forms (as they appear in text),
sequences of words in base forms (lemmata).
The last two techniques are necessary because nominal group lemmatisation often fails, especially in the case of proper names. Their rich inflection in Polish BIBREF3 means that a nominal suffix of an entity may be hard to predict. Therefore, as sketched in the code after the list below, a chunk is considered to match an entity name if:
they share a common prefix,
an unmatched suffix in neither of them is longer than 3 characters,
the common prefix is longer than the unmatched chunk suffix.
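A minimal sketch of this inflection-tolerant matching heuristic (illustrative only; in RAFAEL the entity names are keys of the PATRICIA trie and candidate chunks are compared against them):
import os

def name_matches(chunk, entity_name, max_suffix=3):
    """A chunk matches an entity name if they share a prefix, both unmatched
    suffixes are at most 3 characters long, and the common prefix is longer
    than the unmatched chunk suffix."""
    prefix = len(os.path.commonprefix([chunk, entity_name]))
    chunk_suffix = len(chunk) - prefix
    name_suffix = len(entity_name) - prefix
    return (prefix > 0
            and chunk_suffix <= max_suffix
            and name_suffix <= max_suffix
            and prefix > chunk_suffix)

print(name_matches("komorowskiego", "komorowski"))   # True: inflected form still matches
print(name_matches("kot", "kotlina"))                # False: unmatched name suffix too long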
Given a list of entity mentions, RAFAEL checks their compatibility with a question model. Two of its constituents are taken into account: a general question type and a synset. An entity mention agrees with the NAMED_ENTITY type if its first segment starts with a capital letter; every mention agrees with UNNAMED_ENTITY. To pass a semantic agreement test, the synset of the question model needs to be a (direct or indirect) hypernym of one of the synsets assigned to the entity. For example, the list of synsets assigned to the entity Jan III Sobieski contains <król.1> (king), so it matches a question focus <władca.1, panujący.1, hierarcha.2, pan.1> (ruler) through the hypernymy path <władca.1, panujący.1, hierarcha.2, pan.1> → <monarcha.1, koronowana głowa.1> (monarch) → <król.1>. All the mentions of entities satisfying these conditions are returned for further processing.
Mention selection
When a list of entity mentions in a given document is available, we need to decide which of them most likely answers the question. The obvious way to do that is to compare surroundings of every mention with the content of the question. The procedure consists of two steps: context generation and similarity measurement.
The aim of a context generation step is to create a set of segments surrounding an entity, to which they are assigned. Without capabilities of full text understanding, two approximate approaches seem legitimate:
Sentence-based – for a given entity mention, a sentence in which it appears, serves as a context,
Segment-based – for a given entity mention, every segment sequence of length M, containing the entity, is a context.
Both of them have some advantages: relying on a single sentence ensures relation between an entity and a context, whereas the latter provides possibility of modifying context length. Obviously, the value of M should be proportional to question (precisely, its content) length.
The method of treating sentences as a context has gained most popularity (see work of BIBREF39 ), but a window of fixed size also appears in the literature; for example BIBREF38 used one with M=140 bytes.
The context generation is also related to another issue, i.e. anaphoric expressions. Some segments (e.g. this, him, they) may refer to entities that occurred earlier in a text and therefore harm a similarity estimation. It could be tackled by applying anaphora resolution, but a solution for Polish BIBREF40 remains in an early stage. Observations show that the majority of anaphora refer to an entity in a document title, so the problem is partially bypassed by adding a title to a context.
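Both strategies can be sketched in a few lines of Python; the token-index representation of a mention is an assumption made here for illustration:
def sentence_context(sentence_tokens, title_tokens):
    """Sentence-based strategy: the sentence containing the mention, prefixed with the title."""
    return title_tokens + sentence_tokens

def window_contexts(doc_tokens, title_tokens, start, end, m):
    """Segment-based strategy: every window of m tokens containing the mention
    doc_tokens[start:end], each prefixed with the title."""
    first = max(0, end - m)
    return [title_tokens + doc_tokens[s:s + m] for s in range(first, start + 1)]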
An influence of the context generation techniques on final results is shown in section SECREF88 .
To measure a similarity between a question content (explained in section SECREF18 ) and an entity context (generated by the procedures in the previous section), a Jaccard similarity index BIBREF41 is computed. However, not all word co-occurrences matter equally (e.g. compare this and Honolulu), so word weights are used: $sim(Q, C) = \frac{\sum_{b \in Q \cap C} w_b}{\sum_{b \in Q \cup C} w_b}$
The sets $Q$ and $C$ contain segments in base forms, whereas $w_b$ denotes a weight of the $b$-th base form, equal to its scaled IDF computed on a document set $D$: $w_b \propto \mathrm{IDF}(b) = \log \frac{|D|}{|\{d \in D : b \in d\}|}$
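For instance, the weighted Jaccard score could be computed as below; the standard logarithmic IDF used here is an assumption about the exact weighting:
import math

def idf_weights(document_frequency, n_documents):
    return {t: math.log(n_documents / df) for t, df in document_frequency.items()}

def weighted_jaccard(question_terms, context_terms, weights, default=1.0):
    """Both term sets contain base forms; terms unseen during indexing get a default weight."""
    q, c = set(question_terms), set(context_terms)
    numerator = sum(weights.get(t, default) for t in q & c)
    denominator = sum(weights.get(t, default) for t in q | c)
    return numerator / denominator if denominator > 0 else 0.0

w = idf_weights({"this": 90, "honolulu": 2, "sank": 10, "2000": 30}, n_documents=100)
print(weighted_jaccard({"sank", "2000", "this"}, {"sank", "honolulu", "this"}, w))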
The Jaccard index is a popular solution for sentence similarity measurement in QA (for example see a system by BIBREF42 ). In case of selecting relevant documents, cosine measure is also applied. BIBREF18 compared it to Minimal Span Weighting (MSW) and observed that the latter performs better, as it takes into account a distance between matched words. A study of different techniques for sentence similarity assessment could be found in BIBREF39 .
At this stage, a large set of entity mentions and their contexts, with scores assigned, is available. Which of them answers the question? Choosing the one with the highest score seems an obvious solution, but we could also aggregate scores of different mentions corresponding to the same answer (entity), e.g. compute their sum or mean. However, such experiments did not yield improvement, so RAFAEL returns only a single answer with the highest score.
An answer consists of the following elements: an answer string, a supporting sentence, a supporting document and a confidence value (the score). The sentence and the document in which the best mention appeared are assumed to support the answer. Thanks to properties of Jaccard similarity, the mention score ranges from 0 for completely unrelated sentences to 1 for practically (ignoring inflection and word order) the same. Therefore, it may serve as an answer confidence.
When no entity mentions satisfying the constraints of a question are found, no answer is returned. This type of result could also be used when the best confidence score is below a predefined value; the performance of such a technique is shown in section SECREF88 . The refusal to answer in case of insufficient confidence plays an important role in Jeopardy!, hence in IBM Watson BIBREF2 , but it was also used to improve precision in other QA systems BIBREF43 .
Deep Entity Recognition
Deep Entity Recognition procedure is an alternative to applying Named Entity Recognition in QA to find entities matching question constraints. It scans a text and finds words and multi-word expressions, corresponding to entities. However, it does not assign them to one of several NE categories; instead, WordNet synsets are used. Therefore, named entities are differentiated more precisely (e.g. monarchs and athletes) and entities beyond the classical NE categories (e.g. species, events, devices) could also be recognised in a text.
It does not seem possible to perform this task relying solely on features extracted from words and surrounding text (as in NER), so it is essential to build an entity library. Such libraries already exist (Freebase, BabelNet, DBpedia or YAGO) and could provide an alternative for DeepER, but they concentrate on English. The task of adaptation of such a base to another language is far from trivial, especially for Slavonic languages with complex NE inflection BIBREF3 . An ontology taking into account Polish inflection (Prolexbase) has been created by BIBREF44 , but it contains only 40,000 names, grouped into 34 types.
Entity Library
An entity library for DeepER contains knowledge about entities that is necessary for deep entity recognition. Each of them consists of the following elements (entity #9751, describing the Polish president, Bronisław Komorowski):
Main name: Bronisław Komorowski,
Other names (aliases): Bronisław Maria Komorowski, Komorowski,
Description URL: http://pl.wikipedia.org/wiki/?curid=121267,
plWordNet synsets:
<podsekretarz1, podsekretarz stanu1, wiceminister1> (vice-minister, undersecretary),
<wicemarszałek1> (vice-speaker of the Sejm, the Polish parliament),
<polityk1> (politician),
<wysłannik1, poseł1, posłaniec2, wysłaniec1, posłannik1> (member of a parliament),
<marszałek1> (speaker of the Sejm),
<historyk1> (historian),
<minister1> (minister),
<prezydent1, prezydent miasta1> (president of a city, mayor).
A process of entity library extraction is performed offline, before question answering. The library built for deep entity recognition in RAFAEL, based on the Polish Wikipedia (857,952 articles, 51,866 disambiguation pages and 304,823 redirections), contains 809,786 entities with 1,169,452 names (972,592 unique). The algorithm does not depend on any particular feature of Wikipedia, so any corpus containing entity definitions could be used.
Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account.
The whole process is more complicated than the simple example shows. Generally, it consists of the following steps:
Prepare a corpus – data format and annotation process is the same as for a knowledge base, used in question answering, see section SECREF12 . It differs in scope of page categories, including not only articles, but also disambiguation and redirection pages.
For each of article pages, extract the first paragraph and apply readDefinition function. If a resulting entity has a non-empty synset list, add it to the library. If some of the redirection pages point to the entity name, add their names as entity aliases.
For each of disambiguation pages, extract all items and apply readDefinition function. If an item refers to an existing entity, extend it with extracted synsets and disambiguation page name. Create a new entity otherwise. Add redirection names as previously.
Save the obtained base for future use.
Function readDefinition(text) – interprets a definition to assign synsets to an entity.
Input: text – annotated first paragraph of an encyclopaedic entry
Output: synsets – synsets describing the entity
  synsets := {}
  text := removeInBrackets(text)
  text := removeInQuotes(text)
  for pattern in definitionPatterns:
    if text matches pattern:
      text := match(pattern, text).group(2)
      break
  text := removeDefinitionPrefixes(text)
  chunks := split(text, separators)
  for chunk in chunks:
    group := firstGroupOrWord(chunk)
    if isNominal(group):
      synsets := synsets ∪ extractSynsets(group)
    else:
      break
  return synsets
The readDefinition function (shown as algorithm SECREF40 ) analyses a given paragraph of text and extracts a set of synsets, describing an entity, to which it corresponds, as exemplified by figure FIGREF54 . Simplifying, it is done by removing all unnecessary text (in brackets or quotes), splitting it on predefined separators (commas, full stops, semicolons) and applying extractSynsets function with an appropriate stop criterion. The readDefinition makes use of the following elements:
removeInBrackets(text) – removes everything that is between brackets ([], () or {}) from the text (step (1) in figure FIGREF54 ).
removeInQuotes(text) – removes everything between single or double quotes from the text (step (1) in the example).
definitionPatterns – contains patterns of strings separating a defined concept from a definition, e.g. hyphens or dashes (used in step (2) of the example) or jest to (is a).
removeDefinitionPrefixes(text) – removes expressions commonly prefixing a nominal group, such as jeden z (one of), typ (a type of) or klasa (a class of), not present in the example.
separators – a set of three characters that separate parts of a definition: ".", "," and ";".
firstGroupOrWord(chunk) – returns the longest syntactic element (syntactic group or word) starting at the beginning of a chunk (step (4) in the example).
isNominal(chunk) – decides whether a chunk is a noun in nominative, a nominal group or a coordination of nominal groups.
Function extractSynsets(chunk) – recursively extracts synsets from a nominal chunk.
Input: chunk – a nominal chunk (a syntactic group or a single noun)
Output: WordNet synsets corresponding to chunk
  lemma := lemmatise(chunk)
  if inWordNet(lemma):
    return getLexemes(lemma).synset(0)
  if isCoordination(chunk):
    synsets := {}
    for element in chunk:
      synsets := synsets ∪ extractSynsets(element)
    return synsets
  if isGroup(chunk):
    return extractSynsets(chunk.semanticHead)
  return {}
The extractSynsets function (shown as algorithm SECREF40 ) accepts a nominal chunk and extracts WordNet synsets, corresponding to it. It operates recursively to dispose any unnecessary chunk elements and find the longest subgroup, having a counterpart in WordNet. It corresponds to step (5) in figure FIGREF54 and uses the following elements:
lemmatise(chunk) – returns a lemma of a nominal group.
inWordNet(text) – checks whether a given text corresponds to a lexeme in WordNet.
getLexemes(text) – returns a list of WordNet lexemes corresponding to a given text.
synset(n) – returns a synset including a lexeme in a given word sense number.
isCoordination(chunk) – returns TRUE iff a given chunk is a coordination group.
isGroup(chunk) – returns TRUE iff a given chunk is a group.
semanticHead – an element of a syntactic group, denoted as a semantic head.
A few of the design decisions reflected in these procedures require further comment. First of all, they differ a lot from the studies that involve a definition represented as a bag of words BIBREF48 , BIBREF51 , BIBREF53 . Here, a certain definition structure is assumed, i.e. a series of nominal groups divided by separators. What is more, as the full stop belongs to them, the series may continue beyond a single sentence, which has improved recall in preliminary experiments. Availability of a shallow parsing layer and group lemmatisation makes it possible to query WordNet with syntactic groups instead of single nouns, as in the work of BIBREF46 . As word order is relatively free in Polish, a nominal group cannot be assumed to end with a noun, as BIBREF47 did. Instead, a semantic head of a group is used.
Finally, the problem of the lack of word sense disambiguation remains – the line getLexemes(lemma).synset(0) means that the synset connected to the first meaning of a lexeme is always selected. We assume that it corresponds to the most common meaning, but that is not always the case – in our example in figure FIGREF54 <prezydent.1, prezydent miasta.1> (president of a city, i.e. mayor) precedes <prezydent.2> (president of a country, the obvious meaning). However, it does not have to harm QA performance as long as the question analysis module (section SECREF18 ) functions analogously, e.g. in case of a question beginning with który prezydent... (which president...). Therefore, the decision has been motivated by the relatively good performance of this solution in previously performed experiments on question analysis BIBREF36 . It also works in other applications, e.g. gazetteer generation BIBREF46 .
To assess the quality of the entity library, its content has been compared with synsets manually extracted from 100 randomly selected Wikipedia articles. 95 of them contain a description of an entity in the first paragraph. Among those, the DeepER entity library includes 88 (per-entity recall 92.63 per cent). 135 synsets have been manually assigned to those entities, while the corresponding set in the library contains 133 items. 106 of them are equal (per-synset precision 79.70 per cent), while 13 differ only by word sense. 16 of the manually extracted synsets have no counterpart in the entity library (per-synset recall 88.15 per cent), which instead includes 14 false synsets.
Evaluation
Evaluation of RAFAEL is typical for factoid QA systems: given a knowledge base and a set of questions, its responses are compared to the expected ones, prepared in advance. Section SECREF80 describes the data used in this procedure, whereas section SECREF87 explains how an automatic evaluation is possible without human labour.
Data
The Polish Wikipedia serves as a knowledge base. It has been downloaded from a project site as a single database dump at 03.03.2013, from which plain text files have been extracted using Wikipedia Extractor 2.2 script. It means that only plain text is taken into account – without lists, infoboxes, tables, etc. This procedure leads to a corpus with 895,486 documents, containing 168,982,550 segments, which undergo the annotation process, described in section SECREF12 .
The questions that are to be answered with the knowledge base come from two separate sets:
The development set is based on 1500 (1130 after filtering) questions from a Polish quiz TV show, called Jeden z dziesięciu BIBREF55 . It was involved in previous experiments BIBREF4 , BIBREF36 .
The evaluation set is based on an open dataset for Polish QA systems, published by BIBREF56 . It has been gathered from the Did you know... column, appearing on the main page of the Polish Wikipedia. It contains 4721 questions, from which 1000 have been analysed, which resulted in 576 satisfying the task constraints given in chapter SECREF2 .
Table TABREF85 shows a distribution of different question types and named entity types in the sets.
To each of the questions from both sets some information has been assigned manually. It includes an identification number, an expected answer string, a general question type, a named entity type (if applicable) and an expected source document. Table TABREF86 contains several exemplary questions from the development set.
The additional information (question types and expected documents) makes it possible to evaluate only selected modules of the whole QA system. For example, we could test question classification by comparing results against given question types or entity selection by analysing only the relevant document.
Automatic Evaluation
Thanks to availability of the DeepER entity library, it is possible to automatically perform answer evaluation for all the question types that are recognised by this technique (UNNAMED_ENTITY and NAMED_ENTITY excluding dates, numbers and quantities).
Both an expected and obtained answer are represented as short strings, e.g. Bronisław Komorowski. However, it does not suffice to check their exact equality. That is caused by existence of different names for one entity (Bronisław Maria Komorowski or Komorowski), but also rich nominal inflection (Komorowskiego, Komorowskiemu, ...).
In fact, we want to compare entities, not names. Hence, deep entity recognition is a natural solution here. To check the correctness of an answer, we use it as an input for the recognition process, described in section SECREF73 . Then, it is enough to check whether the expected answer appears in any of the lists of names assigned to the recognised entities. For example, let us consider a question: Kto jest obecnie prezydentem Polski? (Who is the current president of Poland?) with expected answer Bronisław Komorowski and a system answer Komorowski. The DeepER process finds many entities in the string (all the persons bearing this popular surname). One of them is the question goal and hence has Bronisław Komorowski in its list of names.
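This check reduces to a few lines; the sketch below assumes a recognize_entities function standing in for the DeepER recognition step and is not the system's actual code:
from collections import namedtuple

Entity = namedtuple("Entity", "names")

def answer_is_correct(system_answer, expected_answer, recognize_entities):
    """Correct iff any entity recognised in the system answer carries the expected
    answer among its names (this abstracts away inflection and aliases)."""
    expected = expected_answer.lower()
    return any(expected in (name.lower() for name in entity.names)
               for entity in recognize_entities(system_answer))

def toy_recognizer(text):            # stand-in for the real entity recognition process
    return [Entity(names=["Bronisław Komorowski", "Komorowski"])]

print(answer_is_correct("Komorowski", "Bronisław Komorowski", toy_recognizer))   # True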
As the process of entity recognition is imperfect, so is the automatic evaluation. However, it still lets us notice general trends in answering performance with respect to several factors. Of course, the final evaluation needs to be checked manually.
Results
As mentioned in the previous section, the results consist of two groups: experiments showing the influence of some aspects of the algorithm on performance, and a final assessment. Both use the Polish Wikipedia as a knowledge base, whereas the questions asked belong to the development and evaluation sets, respectively. In this section, recall measures the percentage of questions to which RAFAEL gave any answer, whereas precision denotes the percentage of questions answered correctly.
When analysing results of different entity recognition techniques, we need to remember that they strongly rely on the output of the question analysis, which is not perfect. In particular, tests show that 15.65 per cent of questions are assigned a wrong type and 17.81 per cent of search results do not include the expected document BIBREF36 . The entity recognition (ER) stage, a focus of this work, is very unlikely to deliver valid answers in these cases. However, as the expected question type and source document are available in the question metadata, it is possible to correct the results of question analysis by artificially replacing a wrong type and/or adding the expected document to the retrieved set. In that way the ER modules could be evaluated as if question analysis worked perfectly. Note that this approach slightly favours NER-based solutions as the question metadata contains general types and named entity types but lacks focus synsets, used by DeepER.
Experiments
The goal of the first experiment is to test how the number of documents retrieved from the search engine and analysed by the entity recognition techniques influences the performance. Question classification errors have been bypassed as described in the previous paragraph. Additionally, two versions have been evaluated: with and without corrections of the retrieved set of documents. Figure FIGREF89 demonstrates results for different entity recognition techniques.
As we can see, if a retrieved set contains the desired article, adding new documents slightly increases recall, while precision drops observably. That is because additional irrelevant documents usually introduce noise. However, in some cases they are useful, as increasing recall indicates. On the other hand, if we have no guarantee of presence of the expected document in a list, it seems more desirable to extend it, especially for small sizes. For sets bigger than 50 elements, the noise factor again dominates our results. Judging by F1 measure, the optimal value is 20 documents.
When it comes to the comparison, it should be noted that DeepER performs noticeably better than traditional NER. The gain in precision is small, but recall is almost twice as big. It could be easily explained by the fact that the NER solutions are unable to handle UNNAMED_ENTITY type, which accounts for 36 per cent of the entity questions.
It is also worthwhile to check how the system performs while using different values of the minimal confidence rate (Jaccard similarity), as described in section UID38 . It could become useful when we demand higher precision and accept a lower recall ratio. The plot in figure FIGREF90 shows answering performance using DeepER with corrected question analysis with respect to the minimal confidence rate. Generally, the system behaves as expected, but the exact values disappoint. The precision remains at a level of 25-40 per cent up to confidence 0.75, at which point recall drops to only 0.35 per cent. Values of the F1 measure suggest that 0.2 is the highest sensible confidence rate.
One more parameter worth testing, explained in section UID34 , is the context generation strategy. To find the entity with a context most similar to a question content, we could analyse a single sentence, where it appears, or a sequence of words of a predefined length. For both of these solutions, we could also add a document title, as it is likely to be referred to by anaphoric expressions. Figure FIGREF91 shows the value of precision (recall does not depend on context) for these four solutions.
We can see that inclusion of a title in a context helps to achieve better precision. The impact of anaphoric reference to the title emerges clearly in the case of the flexible context – the difference grows with context size. Quite surprisingly, for the optimal context length (1.5 * question size), the opposite holds. However, because of the small difference between the techniques including the title, for the sake of simplicity, the single sentence is used in the final evaluation.
Final System Evaluation
To impose a realistic challenge to the system, the evaluation set, used at this stage, substantially differs from the one used during the development (see section SECREF80 ). A configuration for the final evaluation has been prepared based on results of the experiments. All of the tested versions share the following features:
no question analysis corrections,
question classification and query generation solutions which proved best in the previous experiments (see section SECREF18 ),
a retrieved set of documents including 20 articles,
no minimal confidence,
single sentence context with title.
Tested solutions differ with respect to entity recognition only; RAFAEL variants based on the following options are considered:
quantities recognizer (Quant),
traditional NER solutions: Nerf and Liner2,
deep entity recognition (DeepER),
hybrid approach, where entity mentions were gathered from all the above sources.
Table TABREF103 shows results of the final evaluation, expressed by recall, precision, F1 measure and Mean Reciprocal Rank (MRR). Standard deviations of these values have been obtained by bootstrap resampling of the test set. Additionally, precision obtained by automatic evaluation has been added, where applicable. As we can see, only a small percentage of questions is handled by the quantitative entities recognition. NER-based solutions deal with slightly more (Nerf) or less (Liner2) than a half of the questions. When using DeepER, the recall ratio rises to 73 per cent while the precision does not differ significantly. That is because UNNAMED_ENTITY questions (unreachable for traditional NER) account for a substantial part of the test set. The maximum recall is obtained by the hybrid solution (90 per cent) but it comes at a cost of lower precision (33 per cent). On the other hand, when we take the whole ranking lists into account, traditional NERs seem to perform better (in terms of MRR).
As expected, the automatic evaluation underestimates precision, but the difference remains below 5 per cent. Judging by F1 measure, the hybrid solution seems to beat the others.
Discussion
The main strength of DeepER compared to NER, according to the results shown in table TABREF103 , is much higher recall. Table TABREF106 shows examples of questions to which only DeepER provides a correct answer. As we can see (notice the question foci in the table), they could not be assigned to any of the traditional NE categories.
The other striking fact in the results is low precision. A part of the wrong answers was inspected and most of the errors seem to result from the following phenomena:
The entity recognizers also introduce errors typical for them:
The last remark applies also to other techniques. For example, consider a word kot, which means a cat. However, it is also a name of a journal, a lake, a village, a badge (KOT), a surname of 10 persons in the Polish Wikipedia and much more. A human would usually assume the most common meaning (a cat), but the system treats them as equally probable. It introduces noise in the process, as such an entity matches many types of questions.
Another thing that demands explanation is the difference in precision of answers found using Liner2 and DeepER: on the evaluation set the latter does not maintain its advantage from the development set. It could be explained by different compositions of the question sets (table TABREF85 ) – the development one contains many more questions beginning with ambiguous pronouns followed by a question focus, e.g. Który poeta... (which poet), thus providing a precise synset (a poet) for deep entity recognition. Members of the evaluation set much more frequently begin with pronouns like Kto ... (who), where a synset corresponds to a general NE type (a person).
As RAFAEL is the first Polish QA system able to answer with entities instead of documents, we cannot compare it directly to any other solution. However, the evaluation set has been created based on questions published by BIBREF56 and used for evaluation of a document retrieval system BIBREF18 . Their baseline configuration achieved a@1 (percentage of questions answered by the first document, corresponds to precision in table TABREF103 ) equal to 26.09 per cent. By taking into account the proximity of keyword matches (MCSW method), they improved the result to 38.63 per cent. We can see that RAFAEL, despite solving a much more challenging problem, obtains better precision than the baseline in all configurations; using Liner2 it beats even the best method tested on this set (MCSW).
The results suggest two possible directions of future work to improve the performance of RAFAEL. Firstly, involving semantics in sentence matching could solve some of the problems mentioned above. There are a lot of techniques in that area, also in QA systems (see a variety of them used by BIBREF39 ), but their implementation in a morphologically rich language would require a thorough study. For example, there exist techniques computing a semantic similarity based on a WordNet graph BIBREF57 , which is available for Polish and proved very useful in this study. Secondly, the relatively good performance of hybrid ER indicates that it may be good to apply a different entity recognizer to different questions. For example, we could evaluate them for each question type separately and select the one that performs best for a given one. However, it would require much more training data to have a substantial number of questions of each type, including the scarce ones (observe the sparsity of table TABREF85 ).
When it comes to DeepER, word ambiguity seems to be the main issue for future efforts. Of course, a full-lexicon precise word-sense disambiguation tool would solve the problem, but we cannot expect one in the near future. Instead, we could select a synset somewhere on a path between a focus synset and a named entity type. In the example from figure FIGREF54 , rather than choosing between <prezydent.1, prezydent miasta.1> (president of a city) and <prezydent.2> (president of a country) we could use <urzędnik.1, biuralista.1> (official), which covers both meanings.
Conclusions
This paper introduces RAFAEL, a complete open-domain question answering system for Polish. It is capable of analysing a given question, scanning a large corpus and extracting an answer, represented as a short string of text.
In its design, the focus has been on entity recognition techniques, used to extract all the entities compatible with a question from a given text. Apart from the traditional named entity recognition, differentiating between several broad categories of NEs, a novel technique, called Deep Entity Recognition (DeepER), has been proposed and implemented. It is able to find entities belonging to a given WordNet synset, using an entity library, gathered by interpreting definitions from encyclopaedia.
Automatic evaluation, provided by the DeepER approach, has made it possible to perform several experiments, showing answering accuracy with respect to different parameters. Their conclusions have been used to prepare the final evaluation, whose results have been checked manually. They suggest that the DeepER-based solution yields similar precision to NER, but is able to answer many more questions, including those beyond the traditional categories of named entities.
Appendix A: Named Entity Recognition in RAFAEL
As mentioned in section SECREF32 , apart from DeepER, RAFAEL employs also traditional NER-based solutions for entity recognition: NERF, Liner2 and Quant. Each of them uses its own typology of named entities, which covers only a part of the types, enumerated in section SECREF18 . Table TABREF118 shows a correspondence between these types. As we can see, there are a few problems:
The problems 3 and 4 are solved by an additional postprocessing code, extracting CENTURY from date and NAME and SURNAME from person_nam entities. In case of multi-segment person entities it assumes that the first and last word correspond to first and last name, respectively.
While NERF and Liner2 are standalone NER tools and details of their design are available in previously mentioned publications, Quant has been created specifically for RAFAEL. To find numbers, it annotates all chains of segments according to a predefined pattern, which accepts the following types of segments:
The pattern is matched in greedy mode, i.e. it adds as many new segments as possible. It could recognise expressions like 10 tysięcy (10 thousand), kilka milionów (several million), 10 000 or 1.698,88 (1,698.88).
Quantity is a sequence of segments, recognised as a number, followed by a unit of measurement. To check whether a word denotes a unit of measurement, the plWordNet is searched for lexemes equal to its base. Then it suffices to check whether it belongs to a synset, having <jednostka miary 1> (unit of measurement) as one of (direct or indirect) hypernyms, e.g. piętnaście kilogramów (fifteen kilograms) or 5 000 watów (5 000 watts).
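The unit-of-measurement test at the core of Quant can be sketched as follows; the miniature lexicon and hypernym graph are illustrative assumptions (the real system queries plWordNet):
FIRST_SENSE = {"kilogram": "kilogram.1", "wat": "wat.1"}
HYPERNYMS = {
    "kilogram.1": {"jednostka masy.1"},
    "jednostka masy.1": {"jednostka miary.1"},
    "wat.1": {"jednostka mocy.1"},
    "jednostka mocy.1": {"jednostka miary.1"},
}
UNIT_SYNSET = "jednostka miary.1"            # <unit of measurement>

def is_unit_of_measurement(base_form):
    """True iff the word's synset has <jednostka miary 1> among its (direct or
    indirect) hypernyms."""
    start = FIRST_SENSE.get(base_form)
    seen, stack = set(), list(HYPERNYMS.get(start, set()))
    while stack:
        synset = stack.pop()
        if synset == UNIT_SYNSET:
            return True
        if synset not in seen:
            seen.add(synset)
            stack.extend(HYPERNYMS.get(synset, set()))
    return False

print(is_unit_of_measurement("kilogram"), is_unit_of_measurement("kot"))   # True False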
Acknowledgments
Study was supported by research fellowship within "Information technologies: research and their interdisciplinary applications" agreement number POKL.04.01.01-00-051/10-00. Critical reading of the manuscript by Agnieszka Mykowiecka and Aleksandra Brzezińska is gratefully acknowledged. | only the first word sense (usually the most common) is taken into account |
238ec3c1e1093ce2f5122ee60209b969f7669fae | 238ec3c1e1093ce2f5122ee60209b969f7669fae_0 | Q: How is the fluctuation in the sense of the word and its neighbors measured?
Text: Introduction
Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.
To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy.
Related Work
The distributed word representation can be computed as weight vectors of neurons which learn language modeling BIBREF0 . We can obtain a distributed representation of a word using the Word2Vec software BIBREF1 , which enables us to perform vector addition/subtraction on a word's meaning. The theoretical background is analyzed by BIBREF2 , where the operation is to factorize a word-context matrix, where the elements in the matrix are some function of the given word and its context pairs. This analysis gives us insight into how the vector is affected by multiple senses or multiple context sets. If a word has two senses, the obtained representation for the word will be a linearly interpolated point between the two points of its senses.
The importance of multiple senses is well recognized in word sense detection in distributed representation. The usual approach is to compute the corresponding vectors for each sense of a word BIBREF3 , BIBREF4 . In this approach, first, the context is clustered. Then, the vector for each cluster is computed. However, the major problem faced by this approach is that all target words need to be assumed to be polysemic words first, and their contexts always need to be clustered. Another approach is to use external language resources for word sense, and to classify the context BIBREF5 . The problem with this approach is that it requires language resources of meanings to obtain the meaning of a polysemic word. If we know whether a given word is polysemic or monosemic through a relatively simple method, we can concentrate our attention on polysemic words.
Senses and Contexts
In this paper, we assume that the sense of a word is determined by the distribution of contexts in which the word appears in a given corpus. If a word comes to be used in new contexts, the word comes to have a new sense. If we could have an infinitely large corpus, this sense might converge to the sense in the dictionary. In reality, the size of the corpus at hand is limited, and some senses indicated in a dictionary may not appear in the corpus. The distinction between the senses in a dictionary and the senses in the corpus is important in this paper, because it is crucial for discussing polysemy. All discussions in this paper depend on the corpus at hand. We use the FIL9 corpus (http://mattmahoney.net/dc/textdata), which primarily consists of descriptions of believed facts, rather than conversations. We can expect that the senses that are mainly used in conversation would not appear in this corpus.
In this paper, we analyze auxiliary verbs, which are polysemic words according to a dictionary. If the corpus is limited to descriptions of believed facts, we may regard auxiliary verbs as monosemic words, since their contexts are limited. In addition, we particularly analyze the relationship between the auxiliary verb "may" and the name of the month "May". In the dictionary, these two are regarded as two different words, rather than as two different senses of one word. By ignoring upper/lower case characters, these two words have the same character sequence, and the word "may" becomes a polysemic word, which has two types of context in the given corpus.
Proposed Method
Our proposed method is based on the following measures. Let $\vec{w}$ be the vector corresponding to the given word. Let $N$ be the size of the neighbor, such as 4. First, we choose the $N$ neighboring words whose angle with the given word is the smallest. This operation is already implemented in the Word2Vec software. Let $\vec{a_i}(\vec{w})$ be the vector corresponding to the $i$-th word in the neighbor of the given word.
We choose the uniformity of vectors, which can be regarded as a general case of the triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the norm of the vector sum of the vectors divided by the scalar sum of the norms of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. The Surrounding Uniformity (SU) can be expressed as follows: $SU(\vec{w}) = \frac{|\vec{s}(\vec{w})|}{|\vec{w}| + \sum _{i=1}^{N}|\vec{a_i}(\vec{w})|}$
where $\vec{s}(\vec{w}) = \vec{w} + \sum _{i=1}^{N} \vec{a_i}(\vec{w}).$
When computing SU, we consider the set of words whose vectors are reliable. We choose these words as the most frequently appearing words in the corpus; the size of this set is denoted as $limit$ . If a word is not in this set, or the word does not have a sufficient number of neighbors in this set, we consider the value of SU to be undefined, and the word does not have this value.
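A minimal sketch of this computation with gensim and numpy is given below; it assumes a trained KeyedVectors model kv and approximates the search scope by filtering the global similarity ranking to stable words:
import numpy as np

def surrounding_uniformity(word, kv, stable_words, n_neighbors=4):
    """SU(w) = |w + sum(a_i)| / (|w| + sum(|a_i|)) over the N nearest stable
    neighbors; returns None if fewer than N stable neighbors are found."""
    ranked = kv.most_similar(word, topn=len(stable_words))
    neighbors = [w for w, _ in ranked if w in stable_words][:n_neighbors]
    if len(neighbors) < n_neighbors:
        return None
    vectors = [kv[word]] + [kv[w] for w in neighbors]
    summed = np.sum(vectors, axis=0)
    return float(np.linalg.norm(summed) / sum(np.linalg.norm(v) for v in vectors))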
Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:
1) Setting N, the size of the neighbor.
2) Choosing the N neighboring words $a_i$ in the order whose angle with the vector of the given word $w$ is the smallest.
3) Computing the surrounding uniformity for $a_i$ ( $0 < i \le N$ ) and $w$ .
4) Computing the mean $m$ and the sample variance $\sigma $ for the uniformities of $a_i$ .
5) Checking whether the uniformity of $w$ is less than $m - 3\sigma $ . If the value is less than $m - 3\sigma $ , we may regard $w$ as a polysemic word.
This is a basic statistical test BIBREF6 to detect outliers.
Note that we cannot compute the variance if some $a_i$ does not have a value of SU. Further, it is also possible that all $a_i$ have the same SU, sharing identical neighbors. In this case, the variance becomes an extreme value, that is, 0. In these cases, we consider that we cannot perform the statistical test.
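Building on the surrounding_uniformity sketch above, the test itself could look as follows (again an illustrative sketch, not the authors' code):
import statistics

def is_polysemous(word, kv, stable_words, n_neighbors=4):
    """Returns True/False for the outlier test, or None if the test cannot be performed."""
    su_w = surrounding_uniformity(word, kv, stable_words, n_neighbors)
    if su_w is None:
        return None
    ranked = kv.most_similar(word, topn=len(stable_words))
    neighbors = [w for w, _ in ranked if w in stable_words][:n_neighbors]
    su_n = [surrounding_uniformity(a, kv, stable_words, n_neighbors) for a in neighbors]
    if any(s is None for s in su_n) or len(set(su_n)) == 1:
        return None                            # undefined SU or zero variance
    m, sigma = statistics.mean(su_n), statistics.stdev(su_n)
    return su_w < m - 3 * sigma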
Experimental Settings and Examples of Calculation
We used FIL9, which is freely available as the test corpus for Word2Vec and is derived from Wikipedia. We compute 200-dimensional distributed vector representations with the default parameters. In this setting, all uppercase characters are converted into lower case. This is why all proper nouns are in lower case in this example. First, we selected the stable words as the 1000 words that appear most frequently in the text. We compute the surrounding uniformity of these words. We require that the given word $w$ and its neighboring words $a_i$ be stable words. We then determine the search scope for stable neighboring words and set $N$ , which is the number of neighbors used to compute the surrounding uniformity, to 4. For example, if there are 7 stable words in the search scope, we use only the top 4 words to compute the surrounding uniformity.
Table 1 shows the uniformity of auxiliary verbs in this setting. We were able to compute the surrounding uniformity for 160 words; for the remaining 840 words, there were fewer than the required 4 stable neighboring words in the search scope and the surrounding uniformity could not be determined.
For the case of the word “may”, neighbor words are “can”, “should”, “might”, and “will”. Their surrounding uniformities are, 0.9252 (“can”), 0.9232 (“should”), 0.9179 (“might”), and 0.9266 (“will”). Then $m$ is equal to 0.9232, and $\sigma $ is equal to 0.0038. Therefore, $m-3\sigma $ is 0.9118, which is greater than 0.8917 (“may”). Since the surrounding uniformity of the word “may” is regarded as an outlier, we think of “may” as polysemic. In this setting, the word “may” is polysemic because the program works in a case-insensitive mode, and the word “may” could be both an auxiliary verb and the name of a month.
The next example is the word “might”, whose surrounding uniformity is smaller than every neighbor word. For the word “might”, neighbor words are “would”, “could”, “should”, and “cannot”. Their surrounding uniformities are 0.9266 (“would”), 0.9290 (“could”), 0.9232 (“should”), and 0.9224 (“cannot”). Hence, $m$ is equal to 0.9253, and $\sigma $ is equal to 0.0032. Therefore, $m-3\sigma $ is 0.9157, which is less than 0.9179 (“might”). We cannot say 0.9179 is an outlier, and thus we cannot say the word “might” is polysemic.
Figure 1 shows the distribution of vectors.
The vector of “may” is placed in the interpolated position between “may” as an auxiliary verb and “may” as the name of a month. Since the word “may” is more frequently used as auxiliary verb, the vector is placed near other auxiliary verbs. However, the position of “may” could be an outlier for other auxiliary verbs.
In addition, we should show the results of names of months because these names will have the same contexts when the word is used as the name of a month. The word “may” has other contexts as auxiliary verbs. The word “august” has the sense of an adjective in the dictionary. The word “march” has a sense of a verb. Other names are monosemic words in the dictionary. Table 2 shows the surrounding uniformity for all the names of the months.
If we apply the test, only the word “may” passes the test. The example that fails the test is the word “august”, whose surrounding uniformity is also smaller than every neighbor word. For the case of the word “august”, $m$ is equal to 0.9808, and $\sigma $ is equal to 0.0005. Therefore, $m-3\sigma $ becomes 0.9793, which is less than 0.9802 (“august”). We cannot say the word “august” is polysemic, but the value of uniformity is very close to the lower bound. Other names have a greater uniformity than the corresponding lower bound. In summary, the proposed method can detect the polysemic “may”, but cannot detect the polysemicity of “august” and “march”.
Although we can claim nothing if the statistical test fails, even the negatives have a practical value for this test. For the case of the word "august", it can be used as an adjective. Although we cannot say the word "august" is polysemic from the proposed procedure, we cannot claim that the word "august" is monosemic either. We think this failure is caused by the few, if any, contexts of "august" as an adjective. In that case, clustering the contexts will be difficult in practice. Therefore, the proposed test will be meaningful even for a negative result, when the result is used to judge whether further analysis of the context is worthwhile. This discussion should also be true for the word "march", which may be used as a verb.
There are other interesting words for which the proposed method detects polysemicity. These words are "james", "mark", and "bill". The neighboring words are names of persons, such as "john", "richard", "robert", "william", "david", "charles", "henry", "thomas", "michael", and "edward". The words "mark" and "bill" are spelled the same as regular nouns. The word "james" has no such homograph and is examined in the error analysis.
Evaluation
First, we set the value of $limit$ to 1000, and $N$ to 4. We then performed the statistical test on these 1000 words. From these, 33 words passed the test, and we assume that these words belong to the set POLY. Further, we were unable to perform the statistical test for 127 words. We say that the remaining 840 words belong to the set MONO.
As evaluation, we attempted to measure the agreement of human judgment for all the words of POLY and MONO. However, during the evaluation, we found that many of the errors come from a problem of Word2Vec. For example, the vector of "sir" and the vector of "william" are very close because "sir william" should be very close to "william". This is similar for "w" and "george".
Therefore, we first selected words whose 10 neighboring words seem to be reasonable neighbors for human judgments, and performed human judgments of polysemicity. We also focused on the words that have an SU greater than 0.75. This is because the statistical test will be reliable when SU is large. Table 3 shows the list of words that passed the test and have an SU higher than 0.75.
Table 3 shows all the words in POLY that were judged by a human. Similarly, Table 4 shows all the words in MONO that were judged by a human.
We have sampled words from MONO because there are many words in MONO. In these tables, the SU of the surrounding words is also presented.
Table 5 shows the confusion matrix of computer versus human judgments.
As there exists a case for which the number is less than or equal to 5, we need Yates's continuity correction. The result achieves statistical significance at the level of $\alpha =0.05$ . The disagreement in POLY in Table 5 for the word "james" attracted our attention.
Error analysis
The disagreement in MONO could be because we chose $3\sigma $ , which can detect polysemicity only in extremely apparent cases. Even so, the word "james" passes the proposed statistical test. Therefore, the word "james" is worth investigating.
After examining the context of "james", we found that it can be used as the name of a river as well as of a person. Table 6 shows the various names and how many times each name is used with the word "river".
The word “james” is most frequently used with “river”. This may make the word pass the statistical test.
Discussion
The majority of the polysemicity presented in this paper exists because Word2Vec computes the distributed representation after ignoring case. This polysemicity might not be regarded as polysemicity with more careful preprocessing.
The behavior of the proposed method depends on the Word2Vec options and the size of the corpus. If Word2Vec does not produce a reasonable neighbor that consists of words of similar usage, the proposed method cannot work effectively. In addition, a problem arising due to the use of Word2Vec for our application is the placement of the vector "sir" and the vector "william" in similar positions. Therefore, we may need to utilize another method to compute the distributed representation of words. We use the FIL9 corpus for the experiment. Though this corpus is freely available to everyone, its size may not be sufficient. Although we can detect the polysemicity of "may", we cannot detect the polysemicity of "august" and "march". The statistical test cannot detect the right answer if we do not have sufficient data; therefore, this failure may be interpreted as insufficient usage of "march" as a verb and "august" as an adjective, owing to the corpus's origin from Wikipedia, which is in essence a description of facts.
We believe we need to find a way to select the number of neighbors to improve the accuracy of the test. To make the statistical test more accurate, we need more samples from the neighbors. At the same time, since we assume that we can measure the statistical fluctuation from the neighbors, we need to exclude words of a different nature from the neighbors. It is natural that the right number for a neighbor may be different according to the word. The number that we choose is the minimum value for the statistical test, and has room to adjust for improvement.
We computed the neighbor and surrounding uniformity of the 1000 most frequently used words in FIL9. We observed that proper nouns tend to have a large surrounding uniformity, whereas prepositions tend to have a small surrounding uniformity. It is an interesting observation that the surrounding uniformity reflects the part of speech information, although it is difficult to determine the class of a word from the value of the surrounding uniformity alone. For the ease of confirming this observation, the obtained table can be downloaded from the reference (http://www.ss.cs.tut.ac.jp/FIL9SU/).
Conclusion
In this paper, we proposed a method to detect polysemy based on the distributed representation by Word2Vec. We computed the surrounding uniformity of word vector and formed a statistical test. We illustrated several examples to this measure, and explained the statistical test for detecting polysemy. In addition, we have also discussed the feasibility of this test. | Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:
1) Setting N, the size of the neighbor.
2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.
3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.
4) Computing the mean m and the sample variance σ for the uniformities of ai .
5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word. |
f704d182c9e01a2002381b76bf21e4bb3c0d3efc | f704d182c9e01a2002381b76bf21e4bb3c0d3efc_0 | Q: Among various transfer learning techniques, which technique yields to the best performance?
Text: Introduction
Question answering (QA) is the task of retrieving answers to a question given one or more contexts. It has been explored both in the open-domain setting BIBREF0 as well as domain-specific settings, such as BioASQ for the biomedical domain BIBREF1 . The BioASQ challenge provides $\approx 900$ factoid and list questions, i.e., questions with one and several answers, respectively. This work focuses on answering these questions, for example: Which drugs are included in the FEC-75 regimen? $\rightarrow $ fluorouracil, epirubicin, and cyclophosphamide.
We further restrict our focus to extractive QA, i.e., QA instances where the correct answers can be represented as spans in the contexts. Contexts are relevant documents which are provided by an information retrieval (IR) system.
Traditionally, a QA pipeline consists of named-entity recognition, question classification, and answer processing steps BIBREF2 . These methods have been applied to biomedical datasets, with moderate success BIBREF3 . The creation of large-scale, open-domain datasets such as SQuAD BIBREF4 has recently enabled the development of neural QA systems, e.g., wang2016machine, dcn, seo2016bidirectional, weissenborn2017fastqa, leading to impressive performance gains over more traditional systems.
However, creating large-scale QA datasets for more specific domains, such as the biomedical domain, would be very expensive because of the need for domain experts, and is therefore not desirable. The recent success of deep learning based methods on open-domain QA datasets raises the question of whether the capabilities of trained models are transferable to another domain via domain adaptation techniques. Although domain adaptation has been studied for traditional QA systems BIBREF5 and deep learning systems BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , it has to our knowledge not yet been applied to end-to-end neural QA systems.
To bridge this gap we employ various domain adaptation techniques to transfer knowledge from a trained, state-of-the-art neural QA system (FastQA, weissenborn2017fastqa) to the biomedical domain using the much smaller BioASQ dataset. In order to answer list questions in addition to factoid questions, we extend FastQA with a novel answering mechanism. We evaluate various transfer learning techniques comprehensively. For factoid questions, we show that mere fine-tuning reaches state-of-the-art results, which can further be improved by a forgetting cost regularization BIBREF9 . On list questions, the results are competitive to existing systems. Our manual analysis of a subset of the factoid questions suggests that the results are even better than the automatic evaluation states, revealing that many of the "incorrect" answers are in fact synonyms to the gold-standard answer.
Model
Our network architecture is based on FastQA BIBREF15 , a state-of-the-art neural QA system. Because the network architecture itself is exchangeable, we treat it as a black box, with subtle changes at the input and output layer as well as to the decoding and training procedure. These changes are described in the following. See Figure 1 for an overview of the system.
Input Layer
In a first step, words are embedded into a high-dimensional vector space. We use three sources of embeddings, which are concatenated to form a single embedding vector:
GloVe embeddings: 300-dimensional GloVe vectors BIBREF14 . These are open-domain word vectors trained on 840 billion tokens from web documents. The vectors are not updated during training.
Character embeddings: As used in FastQA BIBREF15 and proposed originally by seo2016bidirectional, we employ a 1-dimensional convolutional neural network which computes word embeddings from the characters of the word.
Biomedical Word2Vec embeddings: 200-dimensional vectors trained using Word2Vec BIBREF18 on about 10 million PubMed abstracts BIBREF19 . These vectors are specific to the biomedical domain and we expect them to help on biomedical QA.
As an optional step, we add entity tag features to the token embeddings via concatenation. Entity tags are provided by a dictionary-based entity tagger based on the UMLS Metathesaurus. The entity tag feature vector is a 127-dimensional bit vector that for each of the UMLS semantic types states whether the current token is part of an entity of that type. This step is only applied if explicitly noted.
Finally, a one-hot encoding of the question type (factoid or list) is appended to all the input vectors. With these embedding vectors as input, we invoke FastQA to produce start and end scores for each of the $n$ context tokens. We denote start scores by $y_{start}^{i}$ and end scores conditioned on a predicted start at position $i$ by $y_{end}^{i, j}$ , with start index $i \in [1, n]$ and end index $j \in [i, n]$ .
Output Layer
In our adapted output layer, we convert the start and end scores to span probabilities. The computation of these probabilities is independent of the question type. The interpretation, however, depends on the question type: While for factoid questions, the list of answer spans is interpreted as a ranked list of answer candidates, for list questions, answers above a certain probability threshold are interpreted as the set of answers to the question.
Given the start scores $y_{start}^1, ..., y_{start}^n$ and end scores $y_{end}^{i, 1}, ..., y_{end}^{i, n}$ , we compute the start and end probabilities as follows:
$$p_{start}^i = \sigma (y_{start}^i)$$ (Eq. 16)
$$p_{end}^{i, \cdot } = \operatorname{softmax}(y_{end}^{i, \cdot })$$ (Eq. 17)
where $\sigma (x)$ is the sigmoid function. As a consequence, multiple tokens can be chosen as likely start tokens, but the network is expected to select a single end token for a given start token, hence the $\operatorname{softmax}$ function. Finally, the probability that a given span $(i, j)$ answers the question is $p_{span}^{i, j} = p_{start}^{i} \cdot p_{end}^{i, j}$ . This extension generalizes the FastQA output layer such that multiple answer spans with different start positions can have a high probability, allowing us to retrieve multiple answers for list questions.
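As an illustration of Eq. 16, Eq. 17 and the span score (a NumPy sketch, not the authors' code), the following function converts a vector of start scores and a matrix of start-conditioned end scores into span probabilities; spans with $j < i$ are masked out before the softmax.

```python
import numpy as np

def span_probabilities(y_start, y_end):
    """Convert raw scores into span probabilities.
    y_start: shape (n,) start scores; y_end: shape (n, n), where row i holds
    the end scores conditioned on a start at position i."""
    p_start = 1.0 / (1.0 + np.exp(-y_start))           # Eq. 16: element-wise sigmoid
    valid = np.triu(np.ones_like(y_end, dtype=bool))   # a valid span needs j >= i
    scores = np.where(valid, y_end, -np.inf)
    scores = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(scores)
    p_end = e / e.sum(axis=1, keepdims=True)           # Eq. 17: softmax over end positions
    return p_start[:, None] * p_end                    # p_span[i, j] = p_start[i] * p_end[i, j]
```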
Decoding
Given a trained model, start probabilities can be obtained by running a forward pass and computing the start probability as in Equation 16 . For the top 20 starts, we compute the end probabilities as given by Eq. 17 . From the start and end probabilities, we extract the top 20 answer spans ranked by $p_{span}^{i, j}$ . As a simple post-processing step, we remove duplicate strings and retain only those with the highest probability.
For factoid questions, we output the 5 most likely answer spans as our ranked list of answers. For list questions, we learn a probability cutoff threshold $t$ that defines the set of list answers $A = \lbrace (i, j) | p_{span}^{i, j} \ge t\rbrace $ . We choose $t$ to be the threshold that optimizes the list F1 score on the respective development set.
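A simplified decoding sketch of the procedure above; for brevity it ranks all spans directly by $p_{span}^{i, j}$ rather than first restricting to the top-20 start positions, and the `threshold` default is only a placeholder for the cutoff t tuned on the development set.

```python
def decode(p_span, tokens, question_type, top_k=20, n_factoid=5, threshold=0.5):
    """Extract answers from the span-probability matrix."""
    n = len(tokens)
    # rank candidate spans by p_span[i, j] and keep the top_k
    candidates = sorted(((p_span[i, j], i, j)
                         for i in range(n) for j in range(i, n)),
                        reverse=True)[:top_k]
    # remove duplicate strings, retaining the highest-probability occurrence
    best = {}
    for p, i, j in candidates:
        text = " ".join(tokens[i:j + 1])
        if text not in best:
            best[text] = p
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    if question_type == "factoid":
        return [text for text, _ in ranked[:n_factoid]]     # ranked list of answers
    return [text for text, p in ranked if p >= threshold]   # set of list answers
```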
Domain Adaptation
Our training procedure consists of two phases: In the pre-training phase, we train the model on SQuAD, using a token F1 score as the training objective as by weissenborn2017fastqa. We will refer to the resulting parameters as the base model. In the fine-tuning phase, we initialize the model parameters with the base model and then continue our optimization on the BioASQ dataset with a smaller learning rate.
To avoid catastrophic forgetting during fine-tuning as a means to regularize our model, we optionally add an additional forgetting cost term $L_{fc}$ , as proposed by riemer2017forgettingcost. It is defined as the cross-entropy loss between the current predictions and the base model's predictions.
We also add an L2 loss term $L_{l2}$ which penalizes deviations from the base model's parameters. Note that a more advanced approach would be to apply this loss selectively on weights which are particularly important in the source domain BIBREF10 . The final loss is computed as $L_{final} = L_{original} + C_{fc} \cdot L_{fc} + C_{l2} \cdot L_{l2}$ where $C_{fc}$ and $C_{l2}$ are hyperparameters which are set to 0 unless otherwise noted.
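A possible rendering of the fine-tuning objective $L_{final} = L_{original} + C_{fc} \cdot L_{fc} + C_{l2} \cdot L_{l2}$ in PyTorch; this is an illustrative sketch rather than the original TensorFlow implementation, and it assumes `model(batch)` and the frozen `base_model(batch)` return prediction logits.

```python
import torch
import torch.nn.functional as F

def fine_tuning_loss(model, base_model, batch, original_loss, c_fc=100.0, c_l2=0.0):
    """L_final = L_original + C_fc * L_fc + C_l2 * L_l2 (defaults are illustrative)."""
    # forgetting cost: cross-entropy between current and (frozen) base-model predictions
    with torch.no_grad():
        base_probs = F.softmax(base_model(batch), dim=-1)
    log_probs = F.log_softmax(model(batch), dim=-1)
    l_fc = -(base_probs * log_probs).sum(dim=-1).mean()
    # L2 penalty on deviations from the base model's parameters
    l_l2 = sum((p - p0).pow(2).sum()
               for p, p0 in zip(model.parameters(), base_model.parameters()))
    return original_loss + c_fc * l_fc + c_l2 * l_l2
```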
In this section, we evaluate various domain adaptation techniques. The results of the experiments are summarized in Table 1 .
As a baseline without transfer learning, Experiment 1 trains the model on BioASQ only. Because the BioASQ dataset by itself is very small, a dropout rate of $0.7$ was used, as it worked best in preliminary experiments. We observe a rather low performance, which is expected when applying deep learning to such a small dataset.
Experiments 2 and 3 evaluate the pure fine-tuning approach: Our base model is a system trained on SQuAD only and tested on BioASQ (Experiment 2). For Experiment 3, we fine-tuned the base model on the BioASQ4B training set. We observe that performance increases significantly, especially on list questions. This increase is expected, because the network is trained on biomedical- and list questions, which are not part of the SQuAD dataset, for the first time. Overall, the performance of the fine-tuned model on both question types is much higher than the baseline system without transfer learning.
In order to evaluate the impact of using biomedical word embeddings, we repeat Experiment 3 without them (Experiment 4). We see a factoid and list performance drop of $3.3$ and $1.2$ percentage points, respectively, showing that biomedical word embeddings help increase performance.
In Experiment 5, we append entity features to the word vector, as described in Section "Input Layer" . Even though these features provide the network with domain-specific knowledge, we found that it actually harms performance on factoid questions. Because most of the entity features are only active during fine-tuning with the small dataset, we conjecture that the performance decrease is due to over-fitting.
We continue our study with techniques to combat catastrophic forgetting as a means to regularize training during fine-tuning. In Experiment 6 of Table 1 we fine-tune the base model on a half-half mixture of BioASQ and SQuAD questions (BioASQ questions have been upsampled accordingly). This form of joint training yielded no significant performance gains. Experiment 7 regularizes the model via an additional forgetting cost term, as proposed by riemer2017forgettingcost and explained in Section "Domain Adaptation" . We generally found that this technique only increases performance for factoid questions where the performance boost was largest for $C_{fc} = 100.0$ . The fact that the forgetting loss decreases performance on list questions is not surprising, as predictions are pushed more towards the predictions of the base model, which has very poor performance on list questions.
Experiment 8 adds an L2 loss which penalizes deviations from the base model's parameters. We found that performance decreases as we increase the value of $C_{l2}$ which shows that this technique does not help at all. For the sake of completeness we report results for $C_{l2} = 0.3$ , the lowest value that yielded a significant drop in performance.
Datasets
SQuAD BIBREF4 is a dataset of $\approx 100,000$ questions with relevant contexts and answers that recently sparked research interest in the development of neural QA systems. The contexts are excerpts of Wikipedia articles for which crowdsourcing workers generated question-answer pairs. Because of the large number of training examples in SQuAD, it lends itself perfectly as our source dataset.
The BioASQ challenge provides a biomedical QA dataset BIBREF1 consisting of questions, relevant contexts (called snippets) from PubMed abstracts and possible answers to the question. It was carefully created with the help of biomedical experts.
In this work, we focus on Task B, Phase B of the BioASQ challenge, in which systems must answer questions from gold-standard snippets. These questions can be either yes/no questions, summary questions, factoid questions, or list questions. Because we employ an extractive QA system, we restrict this study to answering factoid and list questions by extracting answer spans from the provided contexts.
The 2017 BioASQ training dataset contains $1,799$ questions, of which 413 are factoid and 486 are list questions. The questions have $\approx 20$ snippets on average, each of which is on average $\approx 34$ tokens long. We found that around $65\%$ of the factoid questions and around $92\%$ of the list questions have at least one extractable answer. For questions with extractable answers, answer spans are computed via a simple substring search in the provided snippets. All other questions are ignored during training and treated as answered incorrectly during evaluation.
Training
We minimize the cross-entropy loss for the gold standard answer spans. However, for multiple answer spans that refer to the same answer (e.g. synonyms), we only minimize the loss for the span with the lowest loss. We use ADAM BIBREF20 for optimization on SQuAD with a learning rate starting at $10^{-3}$, which is halved whenever performance drops between checkpoints. During the fine-tuning phase, we continue optimization on the BioASQ dataset with a smaller learning rate starting at $10^{-4}$ . During both phases, the model is regularized by variational dropout of rate $0.5$ BIBREF21 .
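The "lowest-loss span" rule for synonymous gold spans can be sketched as follows (illustrative PyTorch, reusing the span-probability matrix produced by the output layer):

```python
import torch

def gold_span_loss(p_span, gold_spans):
    """Cross-entropy over gold answer spans: when several spans refer to the
    same answer (e.g. synonyms), only the lowest-loss span is minimized.
    p_span: (n, n) tensor of span probabilities; gold_spans: list of (start, end)."""
    losses = torch.stack([-torch.log(p_span[i, j] + 1e-12) for i, j in gold_spans])
    return losses.min()
```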
Evaluation
The official evaluation measures from BioASQ are mean reciprocal rank (MRR) for factoid questions and F1 score for list questions . For factoid questions, the list of ranked answers can be at most five entries long. The F1 score is measured on the gold standard list elements. For both measures, case-insensitive string matches are used to check the correctness of a given answer. A list of synonyms is provided for all gold-standard answers. If the system's response matches one of them, the answer counts as correct.
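For reference, a simplified sketch of the two measures; the official BioASQ tool additionally accepts a list of synonyms per gold list element, which is omitted from `list_f1` here.

```python
def factoid_mrr(ranked_answers, gold_synonyms):
    """MRR contribution of one question: ranked_answers holds at most five entries,
    gold_synonyms the acceptable gold strings; matching is case-insensitive."""
    gold = {g.lower() for g in gold_synonyms}
    for rank, answer in enumerate(ranked_answers[:5], start=1):
        if answer.lower() in gold:
            return 1.0 / rank
    return 0.0

def list_f1(predicted_items, gold_items):
    """F1 of one list question over case-insensitive exact matches."""
    pred = {p.lower() for p in predicted_items}
    gold = {g.lower() for g in gold_items}
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```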
For evaluation, we use two different fine-tuning datasets, depending on the experiment: BioASQ3B, which contains all questions of the first three BioASQ challenges, and BioASQ4B which additionally contains the test questions of the fourth challenge. BioASQ4B is used as the training dataset for the fifth BioASQ challenge whereas BioASQ3B was used for training during the fourth challenge.
Because the datasets are small, we perform 5-fold cross-validation and report the average performance across the five folds. We use the larger BioASQ4B dataset except when evaluating the ensemble and when comparing to participating systems of previous BioASQ challenges.
All models were implemented using TensorFlow BIBREF22 with a hidden size of 100. Because the context in BioASQ usually comprises multiple snippets, they are processed independently in parallel for each question. Answers from all snippets belonging to a question are merged and ranked according to their individual probabilities.
Ensemble
Model ensembles are a common method to tweak the performance of a machine learning system. Ensembles combine multiple model predictions, for example by averaging, in order to improve generalization and prevent over-fitting. We evaluate the utility of an ensemble by training five models on the BioASQ3B dataset using 5-fold cross-validation. Each of the models is evaluated on the 4B test data, i.e., data which is not included in BioASQ3B.
During application, we run an ensemble by averaging the start and end scores of individual models before they are passed to the sigmoid / softmax functions as defined in Eq. 16 and 17 . In Table 2 we summarize the average performance of the five models, the best performance across the five models, and the performance of the ensemble. We observe performance gains of 3 percentage points on factoid questions and less than 1 percentage point on list questions, relative to the best single model. This demonstrates a small performance gain that is consistent with the literature.
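Score averaging before the sigmoid/softmax can be sketched as below, reusing the `span_probabilities` helper from the output-layer sketch; the `.score()` method standing in for a forward pass that returns raw start and end scores is hypothetical.

```python
import numpy as np

def ensemble_span_probabilities(models, features):
    """Average raw start/end scores across models, then apply the same
    sigmoid/softmax conversion as for a single model (Eq. 16 and 17)."""
    starts, ends = zip(*(m.score(features) for m in models))  # hypothetical interface
    y_start = np.mean(starts, axis=0)
    y_end = np.mean(ends, axis=0)
    return span_probabilities(y_start, y_end)
```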
Comparison to competing BioASQ systems
Because the final results of the fifth BioASQ challenge are not available at the time of writing, we compare our system to the best systems in last year's challenge . For comparison, we use the best single model and the model ensemble trained on BioASQ3B (see Section "Ensemble" ). We then evaluate the model on the 5 batches of last year's challenge using the official BioASQ evaluation tool. Each batch contains 100 questions, of which only some are factoid and list questions. Note that the results underestimate our system's performance, because the competing systems' responses were manually evaluated by humans while our system's responses are evaluated automatically using string matching against a potentially incomplete list of synonyms. In fact, our qualitative analysis in Section "Qualitative Analysis" shows that many answers are counted as incorrect, but are synonyms of the gold-standard answer. The results are summarized in Table 3 and compared to the best systems in the challenge in each of the batches and question type categories.
With our system winning four out of five batches on factoid questions, we consider it state-of-the-art in biomedical factoid question answering, especially when considering that our results might be higher on manual evaluation. The results on list questions are slightly worse, but still very competitive. This is surprising, given that the network never saw a list question prior to the fine-tuning phase. Due to small test set sizes, the sampling error in each batch is large, causing the single model to outperform the model ensemble on some batches.
Qualitative Analysis
In order to get a better insight into the quality of the predictions, we manually validated the predictions for the factoid questions of batch 5 of the fourth BioASQ challenge as given by the best single model (see Table 3 ). There are in total 33 factoid questions, of which 23 have as the gold standard answer a span in one of the contexts. According to the official BioASQ evaluation, only 4 questions are predicted correctly (i.e., the gold standard answer is ranked highest). However, we identified 10 rank-1 answers which are not counted as correct but are synonyms to the gold standard answer. Examples include "CMT4D disease" instead of "Charcot-Marie-Tooth (CMT) 4D disease", "tafazzin" instead of "Tafazzin (TAZ) gene", and " $\beta $ -glucocerebrosidase" instead of "Beta glucocerebrosidase". In total, we labeled 14 questions as correct and 24 questions as having their correct answer in the top 5 predictions.
In the following, we give examples of mistakes made by the system. Questions are presented in italics. In the context, we underline predicted answers and present correct answers in boldface.
We identified eight questions for which the semantic type of the top answer differs from the question answer type. Some of these cases are completely wrong predictions. However, this category also includes subtle mistakes like the following:
In which yeast chromosome does the rDNA cluster reside?
The rDNA cluster in Saccharomyces cerevisiae is located 450 kb from the left end and 610 kb from the right end of chromosome XII...
Here, the system predicted the yeast species in which the rDNA cluster is located, but ignored that the question asks for a chromosome.
Another type of mistake is that the top answer is somewhat correct, but is missing essential information. We labeled four predictions with this category, as in the following example:
How early during pregnancy does non-invasive cffDNA testing allow sex determination of the fetus?
Gold Standard Answer: "6th to 10th week of gestation" or "first trimester of pregnancy"
Given Top Answer: "6th-10th"
In summary, to our judgment, 14 of 33 questions ( $42.4\%$ ) are answered correctly, and 24 of 33 questions ( $72.7\%$ ) are answered correctly in one of the top 5 answers. These are surprisingly high numbers considering low MRR score of $23.7\%$ of the automatic evaluation (Table 3 ).
Discussion and future work
The most significant result of this work is that state-of-the-art results in biomedical question answering can be achieved even in the absence of domain-specific feature engineering. Most competing systems require structured domain-specific resources, such as biomedical ontologies, parsers, and entity taggers. While these resources are available in the biomedical domain, they are not available in most domains.
Our system, on the other hand, requires a large open-domain QA dataset, biomedical word embeddings (which are trained in an unsupervised fashion), and a small biomedical QA dataset. This suggests that our methodology is easily transferable to other domains as well.
Furthermore, we explored several supervised domain adaptation techniques. In particular, we demonstrated the usefulness of forgetting cost for factoid questions. The decreased performance on list questions is not surprising, because the model's performance on those questions is very poor prior to fine-tuning which is due to the lack of list questions in SQuAD. We believe that large scale open-domain corpora for list questions would enhance performance further.
Unsupervised domain adaptation could be an interesting direction for future work, because the biomedical domain offers large amounts of textual data, some of which might even contain questions and their corresponding answers. We believe that leveraging these resources holds potential to further improve biomedical QA.
Conclusion
In this paper, we described a deep learning approach to address the task of biomedical question answering by using domain adaptation techniques. Our experiments reveal that mere fine-tuning in combination with biomedical word embeddings yields state-of-the-art performance on biomedical QA, despite the small amount of in-domain training data and the lack of domain-dependent feature engineering. Techniques to overcome catastrophic forgetting, such as a forgetting cost, can further boost performance for factoid questions. Overall, we show that employing domain adaptation on neural QA systems trained on large-scale, open-domain datasets can yield good performance in domains where large datasets are not available.
Acknowledgments
This research was supported by the German Federal Ministry of Education and Research (BMBF) through Software Campus project GeNIE (01IS12050). | Unanswerable |
da544015511e535503dee2eaf4912a5e36c806cd | da544015511e535503dee2eaf4912a5e36c806cd_0 | Q: What is the architecture of the model?
Text: Introduction
Quickly making sense of large amounts of linguistic data is an important application of language technology. For example, after the 2011 Japanese tsunami, natural language processing was used to quickly filter social media streams for messages about the safety of individuals, and to populate a person finder database BIBREF0. Japanese text is high-resource, but there are many cases where it would be useful to make sense of speech in low-resource languages. For example, in Uganda, as in many parts of the world, the primary source of news is local radio stations, which broadcast in many languages. A pilot study from the United Nations Global Pulse Lab identified these radio stations as a potentially useful source of information about a variety of urgent topics related to refugees, small-scale disasters, disease outbreaks, and healthcare BIBREF1. With many radio broadcasts coming in simultaneously, even simple classification of speech for known topics would be helpful to decision-makers working on humanitarian projects.
Recent research has shown that it is possible to train direct Speech-to-text Translation (ST) systems from speech paired only with translations BIBREF2, BIBREF3, BIBREF4. Since no transcription is required, this could be useful in very low-resource settings, even for languages with no writing systems. In realistic low-resource settings where only a few hours of training data is available, these systems produce poor translations BIBREF5, but it has long been recognized that there are good uses for bad translations BIBREF6. Could classifying the original speech be one of those uses?
We answer this question affirmatively: using ST to translate speech to text, we then classify by topic using supervised models (Figure FIGREF1). We test our method on a corpus of conversational Spanish speech paired with English text translations. Using an ST model trained on 20 hours of Spanish-English data, we are able to predict topics correctly 71% of the time. With even worse ST, we can still predict topics with an accuracy of 61%.
Methods ::: Speech-to-text translation.
We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours.
Methods ::: Topic modeling and classification.
To classify the translated documents, we first need a set of topic labels, which were not already available for our dataset. So, we initially discover a set of topics from the target-language training text using a topic model. To classify the translations of the test data, we choose the most probable topic according to the learned topic model. To train our topic model, we use Nonnegative Matrix Factorization BIBREF8, BIBREF9.
Experimental Setup ::: Data.
We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work.
Experimental Setup ::: Fine-grained topic analysis.
In the Fisher protocol, callers were prompted with one of 25 possible topics. It would seem appealing to use the prompts as topic labels, but we observed that many conversations quickly departed from the initial prompt and meandered from topic to topic. For example, one call starts: “Ok today's topic is marriage or we can talk about anything else...”. Within minutes, the topic shifts to jobs: “I'm working oh I do tattoos.” To isolate different topics within a single call, we split each call into 1 minute long segments to use as `documents'. This gives us 1K training and 5.5K test segments, but leaves us with no human-annotated topic labels for them.
Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7.
To evaluate our ST models, we apply our ST model to test audio, and then predict topics from the translations using the NMF model trained on the human translations of the training data (Figure FIGREF1). To report accuracy we compare the predicted labels and silver labels, i.e., we ask whether the topic inferred from our predicted translation (ST) agrees with one inferred from a gold translation (human).
Results ::: Spanish-English ST.
To put our topic modeling results in context, we first report ST results. Figure FIGREF9 plots the BLEU scores on the Fisher test set and on eval100h for Spanish-English ST models. The scores are very similar for both sets when computed using a single human reference; scores are 8 points higher on the Fisher test set if all 4 of its available references are used. The state-of-the-art BLEU score on the Fisher test set is 47.3 (using 4 references), reported by BIBREF3, who trained an ST model on the entire 160 hours of data in the Fisher training corpus. By contrast, our 20 hour model (ST-20h) achieves a BLEU score of 18.1. Examining the translations (Table TABREF10), we see that while they are mediocre, they contain words that might enable correct topic classification.
Results ::: Topic Modeling on training data.
Turning to our main task of classification, we first review the set of topics discovered from the human translations of train20h (Table TABREF13). We explored different numbers of topics, and chose 10 after reviewing the results. We assigned a name to each topic after manually reviewing the most informative terms; for topics with less coherent sets of informative terms, we include misc in their names.
We argued above that the silver labels are sensible for evaluation despite not always matching the assigned call topic prompts, since they indicate what an automatic topic classifier would predict given correct translations and they capture finer-grained changes in topic. Table TABREF14 shows a few examples where the silver labels differ from the assigned call topic prompts. In the first example, the topic model was arguably incorrect, failing to pick up the prompt juries, and instead focusing on the other words, predicting intro-misc. But in the other examples, the topic model is reasonable, in fact correctly identifying the topic in the third example where the transcripts indicate that the annotation was wrong (specifying the topic prompt as music). The topic model also classifies a large proportion of discussions as intro-misc (typically at the start of the call) and family-misc (often where the callers stray from their assigned topic).
Our analysis also supports our observation that discussed topics stray from the prompted topic in most speech segments. For example, among segments in the 17 training data calls with the prompt religion, only 36% have the silver label religion, and the most frequently assigned label is family-misc with 46%. Further details are in Appendix SECREF9.
Results ::: Topic classification on test data
Now we turn to our main experiment. For each of the audio utterances in eval100h, we have four ST model translations: ST-2.5h, 5h, 10h, 20h (in increasing order of quality). We feed each of these into the topic model from Table TABREF13 to get the topic distribution and use the highest scoring topic as the predicted label.
Figure FIGREF16 compares the frequencies of the silver labels with the predictions from the ST-20h model. The family-misc topic is predicted most often—almost 50% of the time. This is reasonable since this topic includes words associated with small talk. Other topics such as music, religion and welfare also occur with a high enough frequency to allow for a reasonable evaluation.
Figure FIGREF17 shows the accuracy for all ST models, treating the silver topic labels as the correct topics. We use the family-misc topic as a majority class naive baseline, giving an accuracy of 49.6%. We observe that ST models trained on 10 hours or more of data outperform the naive-baseline by more than 10% absolute, with ST-20h scoring 71.8% and ST-10h scoring 61.6%. Those trained on less than 5 hours of data score close to or below that of the naive baseline: 51% for ST-5h and 48% for ST-2.5h.
Since topics vary in frequency, we look at label-specific accuracy to see if the ST models are simply predicting frequent topics correctly. Figure FIGREF18 shows a normalized confusion matrix for the ST-20h model. Each row sums to 100%, representing the distribution of predicted topics for any given silver topic, so the numbers on the diagonal can be interpreted as the topic-wise recall. For example, a prediction of music recalls 88% of the relevant speech segments. We see that the model has a recall of more than 50% for all 10 topics, making it quite effective for our motivating task. The family-misc topic (capturing small-talk) is often predicted when other silver topics are present, with e.g. 23% of the silver dating topics predicted as family-misc.
Related work
We have shown that even low-quality ST can be useful for speech classification. Previous work has also looked at speech analysis without high-quality ASR. In a task quite related to ours, BIBREF15 showed how to cluster speech segments in a completely unsupervised way. In contrast, we learn to classify speech using supervision, but what is important about our result is it shows that a small amount of supervision goes a long way. A slightly different approach to quickly analysing speech is the established task of Keyword spotting BIBREF16, BIBREF17, which simply asks whether any of a specific set of keywords appears in each segment. Recent studies have extended the early work to end-to-end keyword spotting BIBREF18, BIBREF19 and to semantic keyword retrieval, where non-exact but relevant keyword matches are retrieved BIBREF20, BIBREF21, BIBREF22. In all these studies, the query and search languages are the same, while we consider the cross-lingual case.
There has been some limited work on cross-lingual keyword spotting BIBREF23, where ASR is cascaded with text-based cross-lingual retrieval. Some recent studies have attempted to use vision as a complementary modality to do cross-lingual retrieval BIBREF24, BIBREF25. But cross-lingual topic classification for speech has not been considered elsewhere, as far as we know.
Conclusions and future work
Our results show that poor speech translation can still be useful for speech classification in low-resource settings. By varying the amount of training data, we found that translations with a BLEU score as low as 13 are still able to correctly classify 61% of the speech segments.
Cross-lingual topic modeling may be useful when the target language is high-resource. Here, we learned target topics just from the 20 hours of translations, but in future work, we could use a larger text corpus in the high-resource language to learn a more general topic model covering a wider set of topics, and/or combine it with keyword lists curated for specific scenarios like disaster recovery BIBREF26.
Acknowledgments
This work was supported in part by a James S McDonnell Foundation Scholar Award and a Google faculty research award. We thank Ida Szubert, Marco Damonte, and Clara Vania for helpful comments on previous drafts of this paper.
Using NMF for topic modeling
We now describe how we learn topics using NMF. Given a set of text documents as input, the model will output (1) for each document, a distribution over the selected number of topics (henceforth, the document-topic distribution), and (2) for each topic, a distribution over the set of unique terms in the text (henceforth, the topic-term distribution).
Using NMF for topic modeling ::: Text processing
Our training set (train20h) has 1080 English sentences. We start by generating a tf-idf representation for each of these. The English text contains 170K tokens and 6K terms (vocabulary size). As we are looking for topics which are coarse-level categories, we do not use the entire vocabulary, but instead focus only on the high importance terms. We lowercase the English translations and remove all punctuation, and stopwords. We further remove the terms occurring in more than 10% of the documents and those which occur in less than 2 documents, keeping only the 1000 most frequent out of the remaining.
After preprocessing the training set, we have a feature matrix $V$ with dimensions $1080\times 1000$, where each row is a document, and each column represents the tf-idf scores over the 1000 selected terms. The feature matrix will be sparse as only a few terms would occur in a document, and will also be non-negative as tf-idf values are greater than or equal to 0.
Using NMF for topic modeling ::: Learning topics
NMF is a matrix factorization method, which given the matrix $V$, factorizes it into two matrices: $W$ with dimensions $1080\times t$ (long-narrow), and $H$ with dimensions $t\times 1000$ (short-wide), where $t$ is a hyper-parameter. Figure FIGREF21 shows this decomposition when $t$ is set to 10.
In the context of topic modeling, $t$ is the number of topics we want to learn; $W$ is the document-topic distribution, where for each document (row) the column with the highest value is the most-likely topic; and $H$ is the topic-term distribution, where each row is a topic, and the columns with the highest values are terms most relevant to it.
The values for $W$ and $H$ are numerically approximated using a multiplicative update rule BIBREF27, with the Frobenius norm of the reconstruction error as the objective function. In this work, we use the machine-learning toolkit scikit-learn BIBREF14 for feature extraction, and to perform NMF, using default values as described at scikit-learn.org.
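The feature extraction and factorization described above map directly onto scikit-learn; the snippet below is an illustrative sketch (`train_translations` stands for the 1080 English training translations), passing the multiplicative-update solver and Frobenius loss explicitly and requiring a recent scikit-learn for `get_feature_names_out`.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# train_translations: list of the 1080 English training translations (assumed)
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english",
                             max_df=0.10, min_df=2, max_features=1000)
V = vectorizer.fit_transform(train_translations)   # sparse (1080, 1000) tf-idf matrix

# factorize V ~ W H with t = 10 topics, multiplicative updates, Frobenius loss
nmf = NMF(n_components=10, solver="mu", beta_loss="frobenius", random_state=0)
W = nmf.fit_transform(V)    # document-topic distribution, shape (1080, 10)
H = nmf.components_         # topic-term distribution, shape (10, 1000)

# most informative terms per topic
terms = vectorizer.get_feature_names_out()
for k, row in enumerate(H):
    top = [terms[i] for i in row.argsort()[::-1][:8]]
    print(f"topic {k}: {', '.join(top)}")
```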
Using NMF for topic modeling ::: Making topic predictions
Using our topic-term distribution matrix $H$, we can now make topic predictions for new text input. Our evaluation set (eval100h) has 5376 English sentences. For each of these, we have the gold text, and also the ST model output. We preprocess and represent these using the same procedure as before (SECREF19), giving us the feature matrix $V^{^{\prime }}_{gold}$ for gold, and $V^{^{\prime }}_{ST}$ for ST output, each with dimensions $5376\times 1000$. Our goal is to learn the document-topic distributions $W^{^{\prime }}_{gold}$ and $W^{^{\prime }}_{ST}$, where $V^{^{\prime }}_{gold} \approx W^{^{\prime }}_{gold} H$ and $V^{^{\prime }}_{ST} \approx W^{^{\prime }}_{ST} H$.
The values for each $W^{^{\prime }}$ matrix are again numerically approximated using the same objective function as before, but keeping $H$ fixed.
Using NMF for topic modeling ::: Silver labels and evaluation
We use the highest scoring topic for each document as the prediction. The silver labels are therefore computed as $argmax(W^{^{\prime }}_{gold})$, and for ST as $argmax(W^{^{\prime }}_{ST})$. We can now compute the accuracy over these two sets of predictions.
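Continuing the sketch above, inference on the evaluation set keeps $H$ fixed via `transform`, and the silver and ST labels are the row-wise argmax; `eval_gold_translations` and `eval_st_translations` stand for the 5376 gold and ST-output sentences.

```python
import numpy as np

# `vectorizer` and `nmf` come from the training sketch above
W_gold = nmf.transform(vectorizer.transform(eval_gold_translations))  # H held fixed
W_st = nmf.transform(vectorizer.transform(eval_st_translations))

silver_labels = W_gold.argmax(axis=1)   # highest-scoring topic per document
st_labels = W_st.argmax(axis=1)
accuracy = float(np.mean(silver_labels == st_labels))
print(f"topic classification accuracy: {accuracy:.3f}")
```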
Fisher corpus: assigned topics
Figure FIGREF24 shows the topics assigned to callers in the Fisher speech corpus. Some topic prompts overlap, for example, music-preference asks callers to discuss what kind of music they like to listen to, and music-social-message asks them to discuss the social impact of music. For both these topics, we would expect the text to contain similar terms. Similarly the topics cellphones-usage, tech-devices and telemarketing-spam also overlap. Such differences might be difficult for an unsupervised topic modeling algorithm to pick up.
Table TABREF25 shows the topics learned by NMF by using human English translations from the entire 160 hours of training data as input, when the number of topics is set to 25. We observe that some new topics are found that were not discovered by the 20hr/10-topic model and that match the assigned topic prompts, such as juries and housing. However, there are also several incoherent topics, and we don't find a major improvement over the topics learned by just using 20 hours of training data, with the number of topics set to 10.
Tracking topic drift over conversations
To measure how often speakers stray from assigned topic prompts, we take a closer look at the calls in train20h with the assigned prompt of religion. This is the most frequently assigned prompt in the Fisher dataset (17 calls in train20h). We also select this topic for further analysis as it contains terms which are strongly indicative, such as god, bible, etc. and should be relatively easier for our topic model to detect.
Figure FIGREF26 shows the trend of discussion topics over time. Overall, only 36% of the total dialog segments in these calls have the silver label religion, and the most frequently assigned label is family-misc with 46%. We observe that the first segment is often labeled as intro-misc, around 70% of the time, which is expected as speakers begin by introducing themselves. Figure FIGREF26 shows that a similar trend emerges for calls assigned the prompt music (14 calls in train20h). Silver labels for music account for 45% of the call segments and family-misc for around 38%. | BIBREF5 to train neural sequence-to-sequence, NMF topic model with scikit-learn BIBREF14 |
7bc993b32484d6ae3c86d0b351a68e59fd2757a5 | 7bc993b32484d6ae3c86d0b351a68e59fd2757a5_0 | Q: What language do they look at?
Text: Introduction
Quickly making sense of large amounts of linguistic data is an important application of language technology. For example, after the 2011 Japanese tsunami, natural language processing was used to quickly filter social media streams for messages about the safety of individuals, and to populate a person finder database BIBREF0. Japanese text is high-resource, but there are many cases where it would be useful to make sense of speech in low-resource languages. For example, in Uganda, as in many parts of the world, the primary source of news is local radio stations, which broadcast in many languages. A pilot study from the United Nations Global Pulse Lab identified these radio stations as a potentially useful source of information about a variety of urgent topics related to refugees, small-scale disasters, disease outbreaks, and healthcare BIBREF1. With many radio broadcasts coming in simultaneously, even simple classification of speech for known topics would be helpful to decision-makers working on humanitarian projects.
Recent research has shown that it is possible to train direct Speech-to-text Translation (ST) systems from speech paired only with translations BIBREF2, BIBREF3, BIBREF4. Since no transcription is required, this could be useful in very low-resource settings, even for languages with no writing systems. In realistic low-resource settings where only a few hours of training data is available, these systems produce poor translations BIBREF5, but it has long been recognized that there are good uses for bad translations BIBREF6. Could classifying the original speech be one of those uses?
We answer this question affirmatively: using ST to translate speech to text, we then classify by topic using supervised models (Figure FIGREF1). We test our method on a corpus of conversational Spanish speech paired with English text translations. Using an ST model trained on 20 hours of Spanish-English data, we are able to predict topics correctly 71% of the time. With even worse ST, we can still predict topics with an accuracy of 61%.
Methods ::: Speech-to-text translation.
We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours.
Methods ::: Topic modeling and classification.
To classify the translated documents, we first need a set of topic labels, which were not already available for our dataset. So, we initially discover a set of topics from the target-language training text using a topic model. To classify the translations of the test data, we choose the most probable topic according to the learned topic model. To train our topic model, we use Nonnegative Matrix Factorization BIBREF8, BIBREF9.
Experimental Setup ::: Data.
We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work.
Experimental Setup ::: Fine-grained topic analysis.
In the Fisher protocol, callers were prompted with one of 25 possible topics. It would seem appealing to use the prompts as topic labels, but we observed that many conversations quickly departed from the initial prompt and meandered from topic to topic. For example, one call starts: “Ok today's topic is marriage or we can talk about anything else...”. Within minutes, the topic shifts to jobs: “I'm working oh I do tattoos.” To isolate different topics within a single call, we split each call into 1 minute long segments to use as `documents'. This gives us 1K training and 5.5K test segments, but leaves us with no human-annotated topic labels for them.
Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7.
To evaluate our ST models, we apply our ST model to test audio, and then predict topics from the translations using the NMF model trained on the human translations of the training data (Figure FIGREF1). To report accuracy we compare the predicted labels and silver labels, i.e., we ask whether the topic inferred from our predicted translation (ST) agrees with one inferred from a gold translation (human).
Results ::: Spanish-English ST.
To put our topic modeling results in context, we first report ST results. Figure FIGREF9 plots the BLEU scores on the Fisher test set and on eval100h for Spanish-English ST models. The scores are very similar for both sets when computed using a single human reference; scores are 8 points higher on the Fisher test set if all 4 of its available references are used. The state-of-the-art BLEU score on the Fisher test set is 47.3 (using 4 references), reported by BIBREF3, who trained an ST model on the entire 160 hours of data in the Fisher training corpus. By contrast, our 20 hour model (ST-20h) achieves a BLEU score of 18.1. Examining the translations (Table TABREF10), we see that while they are mediocre, they contain words that might enable correct topic classification.
Results ::: Topic Modeling on training data.
Turning to our main task of classification, we first review the set of topics discovered from the human translations of train20h (Table TABREF13). We explored different numbers of topics, and chose 10 after reviewing the results. We assigned a name to each topic after manually reviewing the most informative terms; for topics with less coherent sets of informative terms, we include misc in their names.
We argued above that the silver labels are sensible for evaluation despite not always matching the assigned call topic prompts, since they indicate what an automatic topic classifier would predict given correct translations and they capture finer-grained changes in topic. Table TABREF14 shows a few examples where the silver labels differ from the assigned call topic prompts. In the first example, the topic model was arguably incorrect, failing to pick up the prompt juries, and instead focusing on the other words, predicting intro-misc. But in the other examples, the topic model is reasonable, in fact correctly identifying the topic in the third example where the transcripts indicate that the annotation was wrong (specifying the topic prompt as music). The topic model also classifies a large proportion of discussions as intro-misc (typically at the start of the call) and family-misc (often where the callers stray from their assigned topic).
Our analysis also supports our observation that discussed topics stray from the prompted topic in most speech segments. For example, among segments in the 17 training data calls with the prompt religion, only 36% have the silver label religion, and the most frequently assigned label is family-misc with 46%. Further details are in Appendix SECREF9.
Results ::: Topic classification on test data
Now we turn to our main experiment. For each of the audio utterances in eval100h, we have four ST model translations: ST-2.5h, 5h, 10h, 20h (in increasing order of quality). We feed each of these into the topic model from Table TABREF13 to get the topic distribution and use the highest scoring topic as the predicted label.
Figure FIGREF16 compares the frequencies of the silver labels with the predictions from the ST-20h model. The family-misc topic is predicted most often—almost 50% of the time. This is reasonable since this topic includes words associated with small talk. Other topics such as music, religion and welfare also occur with a high enough frequency to allow for a reasonable evaluation.
Figure FIGREF17 shows the accuracy for all ST models, treating the silver topic labels as the correct topics. We use the family-misc topic as a majority class naive baseline, giving an accuracy of 49.6%. We observe that ST models trained on 10 hours or more of data outperform the naive-baseline by more than 10% absolute, with ST-20h scoring 71.8% and ST-10h scoring 61.6%. Those trained on less than 5 hours of data score close to or below that of the naive baseline: 51% for ST-5h and 48% for ST-2.5h.
Since topics vary in frequency, we look at label-specific accuracy to see if the ST models are simply predicting frequent topics correctly. Figure FIGREF18 shows a normalized confusion matrix for the ST-20h model. Each row sums to 100%, representing the distribution of predicted topics for any given silver topic, so the numbers on the diagonal can be interpreted as the topic-wise recall. For example, a prediction of music recalls 88% of the relevant speech segments. We see that the model has a recall of more than 50% for all 10 topics, making it quite effective for our motivating task. The family-misc topic (capturing small-talk) is often predicted when other silver topics are present, with e.g. 23% of the silver dating topics predicted as family-misc.
Related work
We have shown that even low-quality ST can be useful for speech classification. Previous work has also looked at speech analysis without high-quality ASR. In a task quite related to ours, BIBREF15 showed how to cluster speech segments in a completely unsupervised way. In contrast, we learn to classify speech using supervision, but what is important about our result is it shows that a small amount of supervision goes a long way. A slightly different approach to quickly analysing speech is the established task of Keyword spotting BIBREF16, BIBREF17, which simply asks whether any of a specific set of keywords appears in each segment. Recent studies have extended the early work to end-to-end keyword spotting BIBREF18, BIBREF19 and to semantic keyword retrieval, where non-exact but relevant keyword matches are retrieved BIBREF20, BIBREF21, BIBREF22. In all these studies, the query and search languages are the same, while we consider the cross-lingual case.
There has been some limited work on cross-lingual keyword spotting BIBREF23, where ASR is cascaded with text-based cross-lingual retrieval. Some recent studies have attempted to use vision as a complementary modality to do cross-lingual retrieval BIBREF24, BIBREF25. But cross-lingual topic classification for speech has not been considered elsewhere, as far as we know.
Conclusions and future work
Our results show that poor speech translation can still be useful for speech classification in low-resource settings. By varying the amount of training data, we found that translations with a BLEU score as low as 13 are still able to correctly classify 61% of the speech segments.
Cross-lingual topic modeling may be useful when the target language is high-resource. Here, we learned target topics just from the 20 hours of translations, but in future work, we could use a larger text corpus in the high-resource language to learn a more general topic model covering a wider set of topics, and/or combine it with keyword lists curated for specific scenarios like disaster recovery BIBREF26.
Acknowledgments
This work was supported in part by a James S McDonnell Foundation Scholar Award and a Google faculty research award. We thank Ida Szubert, Marco Damonte, and Clara Vania for helpful comments on previous drafts of this paper.
Using NMF for topic modeling
We now describe how we learn topics using NMF. Given a set of text documents as input, the model will output (1) for each document, a distribution over the selected number of topics (henceforth, the document-topic distribution), and (2) for each topic, a distribution over the set of unique terms in the text (henceforth, the topic-term distribution).
Using NMF for topic modeling ::: Text processing
Our training set (train20h) has 1080 English sentences. We start by generating a tf-idf representation for each of these. The English text contains 170K tokens and 6K terms (vocabulary size). As we are looking for topics which are coarse-level categories, we do not use the entire vocabulary, but instead focus only on the high importance terms. We lowercase the English translations and remove all punctuation, and stopwords. We further remove the terms occurring in more than 10% of the documents and those which occur in less than 2 documents, keeping only the 1000 most frequent out of the remaining.
After preprocessing the training set, we have a feature matrix $V$ with dimensions $1080\times 1000$, where each row is a document, and each column represents the tf-idf scores over the 1000 selected terms. The feature matrix will be sparse as only a few terms would occur in a document, and will also be non-negative as tf-idf values are greater than or equal to 0.
Using NMF for topic modeling ::: Learning topics
NMF is a matrix factorization method, which given the matrix $V$, factorizes it into two matrices: $W$ with dimensions $1080\times t$ (long-narrow), and $H$ with dimensions $t\times 1000$ (short-wide), where $t$ is a hyper-parameter. Figure FIGREF21 shows this decomposition when $t$ is set to 10.
In the context of topic modeling, $t$ is the number of topics we want to learn; $W$ is the document-topic distribution, where for each document (row) the column with the highest value is the most-likely topic; and $H$ is the topic-term distribution, where each row is a topic, and the columns with the highest values are terms most relevant to it.
The values for $W$ and $H$ are numerically approximated using a multiplicative update rule BIBREF27, with the Frobenius norm of the reconstruction error as the objective function. In this work, we use the machine-learning toolkit scikit-learn BIBREF14 for feature extraction, and to perform NMF, using default values as described at scikit-learn.org.
Using NMF for topic modeling ::: Making topic predictions
Using our topic-term distribution matrix $H$, we can now make topic predictions for new text input. Our evaluation set (eval100h) has 5376 English sentences. For each of these, we have the gold text, and also the ST model output. We preprocess and represent these using the same procedure as before (SECREF19), giving us the feature matrix $V^{^{\prime }}_{gold}$ for gold, and $V^{^{\prime }}_{ST}$ for ST output, each with dimensions $5376\times 1000$. Our goal is to learn the document-topic distributions $W^{^{\prime }}_{gold}$ and $W^{^{\prime }}_{ST}$, where $V^{^{\prime }}_{gold} \approx W^{^{\prime }}_{gold} H$ and $V^{^{\prime }}_{ST} \approx W^{^{\prime }}_{ST} H$.
The values for each $W^{^{\prime }}$ matrix are again numerically approximated using the same objective function as before, but keeping $H$ fixed.
Using NMF for topic modeling ::: Silver labels and evaluation
We use the highest scoring topic for each document as the prediction. The silver labels are therefore computed as $argmax(W^{^{\prime }}_{gold})$, and for ST as $argmax(W^{^{\prime }}_{ST})$. We can now compute the accuracy over these two sets of predictions.
Fisher corpus: assigned topics
Figure FIGREF24 shows the topics assigned to callers in the Fisher speech corpus. Some topic prompts overlap, for example, music-preference asks callers to discuss what kind of music they like to listen to, and music-social-message asks them to discuss the social impact of music. For both these topics, we would expect the text to contain similar terms. Similarly the topics cellphones-usage, tech-devices and telemarketing-spam also overlap. Such differences might be difficult for an unsupervised topic modeling algorithm to pick up.
Table TABREF25 shows the topics learned by NMF by using human English translations from the entire 160 hours of training data as input, when the number of topics is set to 25. We observe that some new topics are found that were not discovered by the 20hr/10-topic model and that match the assigned topic prompts, such as juries and housing. However, there are also several incoherent topics, and we don't find a major improvement over the topics learned by just using 20 hours of training data, with the number of topics set to 10.
Tracking topic drift over conversations
To measure how often speakers stray from assigned topic prompts, we take a closer look at the calls in train20h with the assigned prompt of religion. This is the most frequently assigned prompt in the Fisher dataset (17 calls in train20h). We also select this topic for further analysis as it contains terms which are strongly indicative, such as god, bible, etc., and should therefore be relatively easy for our topic model to detect.
Figure FIGREF26 shows the trend of discussion topics over time. Overall, only 36% of the total dialog segments in these calls have the silver label religion, and the most frequently assigned label is family-misc with 46%. We observe that the first segment is often labeled as intro-misc, around 70% of the time, which is expected as speakers begin by introducing themselves. Figure FIGREF26 shows that a similar trend emerges for calls assigned the prompt music (14 calls in train20h). Silver labels for music account for 45% of the call segments and family-misc for around 38%. | Spanish |
da495e2f99ee2d5db9cc17eca5517ddaa5ea8e42 | da495e2f99ee2d5db9cc17eca5517ddaa5ea8e42_0 | Q: Where does the vocabulary come from?
Text: Introduction
Neural machine translation (NMT), proposed by Kalchbrenner and Blunsom BIBREF0 and Sutskever et al. BIBREF1 , has achieved significant progress in recent years. Unlike traditional statistical machine translation (SMT) BIBREF2 , BIBREF3 , BIBREF4 , which contains multiple separately tuned components, NMT builds an end-to-end framework to model the entire translation process. For several language pairs, NMT has already achieved better translation performance than SMT BIBREF5 , BIBREF6 .
Conventional NMT systems limit the vocabulary to a modest size on both sides, and out-of-vocabulary words are replaced by a special UNK symbol. However, training and decoding are often conducted over an open vocabulary, and an obvious problem is that the NMT model is incapable of translating rare words. In particular, if a source word is outside the source vocabulary or its translation is outside the target vocabulary, the model is unable to generate a proper translation for this word during decoding. Both Sutskever et al. BIBREF1 and Bahdanau et al. BIBREF7 have observed that sentences with many out-of-vocabulary words tend to be translated much more poorly than sentences mainly containing frequent words.
To address this problem, many researchers propose a broad category of approaches by employing different translation granularities. Most of these are below the word level, e.g. characters BIBREF8 , hybrid word-characters BIBREF9 , BIBREF5 , and more intelligent subwords BIBREF10 , BIBREF5 . Besides, pioneering studies BIBREF5 , BIBREF6 demonstrate that translation tasks involving Chinese are some of the most difficult problems in NMT systems. However, there is no study that shows which translation granularity is suitable for Chinese-to-English and English-to-Chinese translation tasks.
In this work, we make an empirical comparison of different translation granularities for bidirectional English-Chinese translation tasks. In addition, we analyze the impact of these strategies on the translation results in detail. We demonstrate that for Chinese-to-English NMT, the subword model achieves the best results with 15k and 30k vocabularies, while the hybrid word-character model obtains the highest performance with a 60k vocabulary; the hybrid word-character model is also most suitable for English-to-Chinese translation. Our experiments show that none of the subword methods are bounded by the vocabulary size. Furthermore, we carry out experiments that employ different translation granularities on the source and target sides. The results show that using hybrid word-character granularity on the source side and BPE subword units on the target side achieves the best translation performance for the Chinese-to-English translation task, whereas the hybrid word-character model is most suitable for the English-to-Chinese translation task. To the best of our knowledge, this is the first work on an empirical comparison of various translation granularities for bidirectional Chinese-English translation.
Neural Machine Translation
Our models are based on the encoder-decoder architecture with attention mechanism proposed by Luong et al. BIBREF11 , which utilizes stacked LSTM layers for both the encoder and the decoder, as illustrated in Figure FIGREF1 . In this section, we briefly review the NMT framework.
First, the NMT encodes the source sentence INLINEFORM0 into a sequence of context vector representations INLINEFORM1 . Then, the NMT decodes from the context vector representation INLINEFORM2 and generates the target translation INLINEFORM3 one word at a time by maximizing the probability of INLINEFORM4 . Next, we review the encoder and decoder frameworks briefly.
Encoder: The context vector representation INLINEFORM0 is generated by the encoder using INLINEFORM1 stacked LSTM layers. Bi-directional connections are used for the bottom encoder layer, and INLINEFORM2 is a concatenation vector as shown in Eq. (1): DISPLAYFORM0
All other encoder layers are unidirectional, and INLINEFORM0 is calculated as follows: DISPLAYFORM0
Decoder: The conditional probability INLINEFORM0 is formulated as DISPLAYFORM0
Specifically, we employ a simple concatenation layer to produce an attentional hidden state INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 denotes the target hidden state at the top layer of a stacking LSTM. The attention model calculates INLINEFORM1 as the weighted sum of the source-side context vector representation, just as illustrated in the upper left corner of Figure FIGREF1 . DISPLAYFORM0
where INLINEFORM0 is a normalized item calculated as follows: DISPLAYFORM0
INLINEFORM0 is computed by using the following formula: DISPLAYFORM0
If INLINEFORM0 , INLINEFORM1 will be calculated by combining INLINEFORM2 as feed input BIBREF11 : DISPLAYFORM0
Given the bilingual training data INLINEFORM0 , all parameters of the attention-based NMT are optimized to maximize the following conditional log-likelihood: DISPLAYFORM0
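The DISPLAYFORM placeholders above have lost their equations in extraction. For reference, the standard global-attention formulation of Luong et al. BIBREF11 that the surrounding text describes can be written as follows; this is a hedged reconstruction in our own notation, not necessarily the paper's exact variant.

```latex
\begin{align}
p(y_j \mid y_{<j}, x) &= \mathrm{softmax}\!\left(W_s \tilde{h}_j\right) \\
\tilde{h}_j &= \tanh\!\left(W_c \, [c_j; h_j]\right) \\
c_j &= \sum_{i=1}^{m} \alpha_{j,i} \, h_i^{\mathrm{enc}} \\
\alpha_{j,i} &= \frac{\exp\!\left(e_{j,i}\right)}{\sum_{i'} \exp\!\left(e_{j,i'}\right)},
\qquad e_{j,i} = \mathrm{score}\!\left(h_j, h_i^{\mathrm{enc}}\right) \\
J(\theta) &= \sum_{(x, y) \in D} \log p(y \mid x; \theta)
\end{align}
```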
Description of Different Translation Granularities
We revisit how the source and target sentences ( INLINEFORM0 and INLINEFORM1 ) are represented in NMT. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary INLINEFORM2 of unique tokens. A source sentence INLINEFORM3 is then built as a sequence of the integer indices. The target sentence is similarly transformed into a target sequence of integer indices.
The property of NMT allows us great freedom in the choice of token units, and we can segment sentences in different ways. In this section, we will elaborate on four proposed approaches about the choice of translation granularities.
Character Level
This translation granularity is easy to implement: all we have to do is split each sentence into a sequence of characters. However, character-level modeling on the English side is more challenging, as the network has to deal with long and coherent sequences of characters. In this case, sentences are often 300 INLINEFORM0 1000 symbols long, and the size of the state space grows exponentially, which makes such sequences a great challenge to handle.
Besides, the English alphabet consists of only 26 letters, so a character vocabulary on the English side would be very small. Considering these facts, we only separate the Chinese-side sentences into characters rather than both sides. Figure FIGREF11 shows an example of this translation granularity at the character level.
Hybrid Word-Characters Level
In regular word-based NMT, for all words outside the source vocabulary, one feeds the universal embedding representing UNK as input to the encoder. This is problematic because it discards valuable information about the source word. To address that, hybrid word-character approach will be adopted. In this part, we will introduce this granularity in detail.
Unlike in the conventional word model where out-of-vocabulary words are collapsed into a single UNK symbol, we convert these words into the sequence of constituent characters. Special prefixes are prepended to the characters. The purpose of the prefixes is to show the location of the characters in a word, and to distinguish them from normal in-vocabulary characters. There are three prefixes: INLINEFORM0 B INLINEFORM1 , INLINEFORM2 M INLINEFORM3 , and INLINEFORM4 E INLINEFORM5 , indicating beginning of the word, middle of the word and end of the word, respectively. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. Using this approach, in Figure FIGREF11 , we can see the word “龙年” is segmented into “ INLINEFORM6 B INLINEFORM7 龙 INLINEFORM8 E INLINEFORM9 年”, and the word “繁花似锦” is segmented into “ INLINEFORM10 B INLINEFORM11 繁 INLINEFORM12 M INLINEFORM13 花 INLINEFORM14 M INLINEFORM15 似 INLINEFORM16 E INLINEFORM17 锦”.
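A small sketch of this segmentation scheme and its inverse (our own illustration; the paper gives no code, and `<B>`, `<M>`, `<E>` are stand-ins for the special prefixes rendered as INLINEFORM placeholders above):

```python
def hybrid_segment(tokens, vocab):
    """Replace out-of-vocabulary words by their characters, with
    <B>/<M>/<E> prefixes marking begin/middle/end of the word."""
    out = []
    for word in tokens:
        if word in vocab:
            out.append(word)
        else:
            chars = list(word)
            for i, ch in enumerate(chars):
                if i == 0:
                    out.append("<B>" + ch)
                elif i == len(chars) - 1:
                    out.append("<E>" + ch)
                else:
                    out.append("<M>" + ch)
    return out

def hybrid_restore(tokens):
    """Reverse the tokenization as a post-processing step."""
    out, buf = [], ""
    for tok in tokens:
        if tok[:3] in ("<B>", "<M>"):
            buf += tok[3:]
        elif tok[:3] == "<E>":
            out.append(buf + tok[3:])
            buf = ""
        else:
            if buf:            # flush a dangling partial word, just in case
                out.append(buf)
                buf = ""
            out.append(tok)
    if buf:
        out.append(buf)
    return out
```

For example, `hybrid_segment(["繁花似锦"], vocab=set())` yields `["<B>繁", "<M>花", "<M>似", "<E>锦"]`, mirroring the segmentation of “繁花似锦” described above.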
Subword Level
Considering languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that segment the sentence below the word level (In this paper, we call this level of symbols as subword units). In this part, we will introduce the two different methods of translation granularity on subword level.
Byte pair encoding (BPE) BIBREF12 is a compression algorithm. This simple data compression technique iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. It was first introduced as a translation granularity by Sennrich et al. BIBREF10 . In this approach, instead of merging frequent pairs of bytes, characters or character sequences are merged.
A detailed description of the algorithm for learning BPE operations is given in Sennrich et al. BIBREF10 . At decoding time, each word is first split into a sequence of characters, and the learned operations are then applied to merge the characters into larger, known symbols. For the BPE method, a special symbol is also needed to indicate the merging position. In Figure FIGREF11 , the word “繁花似锦” is segmented into subword units, with the special suffix “@@” appended to the first three units. In the decoding step, the translation results contain these special tokens as well. With these suffixes, we can recover the output easily.
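For reference, the core of the BPE learning procedure of Sennrich et al. BIBREF10 can be sketched as follows; this is a simplified illustration (the released subword-nmt implementation handles end-of-word markers, the “@@” suffixes and efficiency more carefully), and `num_merges` would correspond to the 30000/15000/60000 merge settings used later in the paper.

```python
import collections
import re

def get_pair_stats(vocab):
    """Count adjacent symbol pairs over a {'w o r d </w>': freq} vocabulary."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every occurrence of the given symbol pair into one symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges):
    """Greedily learn the num_merges most frequent merge operations."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges
```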
The wordpiece model (WPM) implementation is initially developed to solve a Japanese/Korean segmentation problem for the speech recognition system BIBREF13 . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters, which is similar to the above method.
The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. The training method of WPM is described in more detail in Schuster and Nakajima BIBREF13 . As shown in Figure FIGREF11 , a special symbol is prepended only at the beginning of words. In this case, the words “龙年”, “繁花似锦”, “洋溢” and “祥和” are split into subwords, and the remaining words stay the same except for a special prefix “_”.
Dataset
We perform all these translation granularities on the NIST bidirectional Chinese-English translation tasks. The evaluation metric is BLEU BIBREF14 as calculated by the multi-bleu.perl script.
Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set.
Training Details
We build the described models modified from the Zoph_RNN toolkit which is written in C++/CUDA and provides efficient training across multiple GPUs. Our training procedure and hyper parameter choices are similar to those used by Luong et al. BIBREF11 . In the NMT architecture as illustrated in Figure FIGREF1 , the encoder has three stacked LSTM layers including a bidirectional layer, followed by a global attention layer, and the decoder contains two stacked LSTM layers followed by the softmax layer.
The word embedding dimension and the size of the hidden layers are all set to 1000. We limit the maximum sentence length in the training corpus to 120. Parameter optimization is performed using both the stochastic gradient descent (SGD) method and the Adam method BIBREF15 . For the first three epochs, we train using the Adam optimizer and a fixed learning rate of 0.001 without decay. For the remaining six epochs, we train using SGD; we set the learning rate to 0.1 at the beginning and halve it whenever the perplexity goes up on the development set. We set the minibatch size to 128. Dropout is also applied on each layer to avoid over-fitting, with the dropout rate set to 0.2. At test time, we employ beam search with beam size b = 12.
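The optimizer schedule as we read it above (Adam for the first three epochs, then SGD with the learning rate halved whenever development perplexity rises) can be summarized with a small sketch; the actual system is the C++/CUDA Zoph_RNN toolkit, so this is only an illustration using PyTorch-style optimizers, and `run_one_epoch` / `evaluate_perplexity` are hypothetical helpers.

```python
from torch.optim import Adam, SGD

def train(model, train_data, dev_data, num_epochs=9):
    lr = 0.001
    optimizer = Adam(model.parameters(), lr=lr)       # Adam phase, no decay
    best_dev_ppl = float("inf")
    for epoch in range(num_epochs):
        if epoch == 3:                                # switch to SGD after 3 epochs
            lr = 0.1
            optimizer = SGD(model.parameters(), lr=lr)
        run_one_epoch(model, train_data, optimizer,   # hypothetical helper
                      batch_size=128, dropout=0.2)
        dev_ppl = evaluate_perplexity(model, dev_data)  # hypothetical helper
        if epoch >= 3 and dev_ppl > best_dev_ppl:     # halve the SGD learning rate
            lr /= 2.0                                 # when dev perplexity goes up
            optimizer = SGD(model.parameters(), lr=lr)
        best_dev_ppl = min(best_dev_ppl, dev_ppl)
```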
Data Segmentation
For Chinese word segmentation, we use our in-house segmentation tools. For the English corpus, the training data is tokenized with the Moses tokenizer. We carry out Chinese-to-English translation experiments with 30k and 15k vocabularies on both sides, and we also conduct an English-to-Chinese translation experiment with a 30k vocabulary. The word-level translation granularity serves as our baseline.
For the character level, we only segment the Chinese sentences into characters; the English sentences remain the same. For the hybrid word-character level, we segment the training corpus on both sides. We rank words by frequency from greatest to least in the training corpus, and in order to prevent pollution from very rare words, we set the segmentation threshold relatively high. For the 30k vocabulary, words with frequency below 64 are segmented into characters on the Chinese side, and the threshold is set to 22 on the English side. For the 15k vocabulary, we set the threshold to 350 and 96 on the Chinese and English sides respectively. For the 60k vocabulary, Chinese words with frequency below 14 and English words with frequency below 6 are split into characters.
For subword level, two different approaches are used. In BPE method, the number of merge operations is set to 30000 on 30k vocabulary size, 15000 on 15k vocabulary size and 60000 on 60k vocabulary size. For Chinese sentences, we segment the training corpus using our in-house segmentation tools first, and then we can apply the BPE method same as English sentences. Considering the essence of WPM method, we do not have to segment words for Chinese and tokenize sentences for English. That is to say, we can train the WPM without pre-processing step. Hence, for WPM method, we conduct our experiments both on the sentences trained on the raw corpus and the sentences trained on the segmented corpus.
Results on Chinese-to-English Translation
We list the BLEU scores of different translation granularities on 30k vocabulary in Table TABREF27 .
Row 1 is the translation result of the state-of-the-art NMT system at the word level. For the character-level granularity (Row 2), the translation quality is higher than the word level by only 0.38 BLEU points. The last three lines in Table TABREF27 are subword-level translation granularities, covering the BPE method and the WPM method. The BPE method (Row 4) achieves the best translation performance, with an improvement of 1.64 BLEU points over the word level. As for the WPM method (Row 6), the gap between this method and the BPE method is narrow. Moreover, the hybrid word-character level model (Row 3) outperforms the word level by 1.46 BLEU points, and its translation quality is very close to that of the BPE method. These experiments show that the hybrid word-character granularity and the BPE subword granularity are our choices of translation granularity for the Chinese-to-English translation task.
We apply the different translation granularities to the training corpus. To make a comparison, we randomly choose 10000 sentences. Table TABREF29 shows the average sentence length for each granularity.
A well-known flaw of NMT models is the inability to properly translate long sentences. However, most of the translation granularities go below the word level, so, as shown in Table TABREF29 , the resulting sequences are longer than at the word level. We therefore examine how translation performance varies with sentence length for all translation granularities. We follow Bahdanau et al. BIBREF7 and group sentences of similar lengths together, computing a BLEU score per group, as demonstrated in Figure FIGREF30 .
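The length-bucketed evaluation can be sketched as follows; this is our illustration, with sacrebleu used as a stand-in for the multi-bleu.perl script mentioned earlier, and source length counted in word-level tokens to match the fairness note below.

```python
import sacrebleu

def bleu_by_length_group(src_sents, refs, hyps, bucket_size=20):
    """Group test sentences by source length (word-level tokens) and
    report a BLEU score per group, as in Bahdanau et al.'s analysis."""
    buckets = {}
    for src, ref, hyp in zip(src_sents, refs, hyps):
        length = len(src.split())                      # word-level token count
        key = (length // bucket_size) * bucket_size    # e.g. 0, 20, 40, ...
        hyp_group, ref_group = buckets.setdefault(key, ([], []))
        hyp_group.append(hyp)
        ref_group.append(ref)
    scores = {}
    for key, (hyp_group, ref_group) in sorted(buckets.items()):
        scores[key] = sacrebleu.corpus_bleu(hyp_group, [ref_group]).score
    return scores
```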
In order to make the comparison fair, length refers to the number of word-level tokens. As mentioned above, the hybrid word-character model is one of the suitable granularity choices for Chinese-to-English translation. We find that when the sentence length is below 20, this model outperforms the other models by a large margin, but as the length goes up, its advantage over the other models diminishes. The character-level granularity performs poorly on sentences shorter than 20 words. We think the reason may be that when sentences are short, the character-level representation cannot express the sentence meaning well. As for the BPE method, we find a strange phenomenon: when the source sentence contains 60 to 80 words, the translation performance of the BPE method is not very good, yet this method achieves almost 3 BLEU points more than the next-best approach when the source sentence is longer than 80 words. As shown in Figure FIGREF30 , the WPM method does not perform well on source sentences shorter than 60 words, but when the sentence length is between 60 and 80 words, it even outperforms the BPE method by up to 5.51 BLEU points. From this experiment, we conclude that the subword models are more effective than the other models in handling long sentences.
We are also interested in the translation results of the different translation granularities with a smaller vocabulary size. We therefore carry out the experiment on the Chinese-to-English task with a 15k vocabulary.
Compared to the 30k vocabulary, the translation performance of the word level (Row 1) with a 15k vocabulary drops by 2.14 BLEU points. However, the character level (Row 2) and the hybrid word-character level (Row 3) achieve 42.09 and 43.12 BLEU points respectively, which is on par with the translation quality at the 30k vocabulary. Both models exceed the word level by a large margin. We infer that both the character level and the hybrid word-character level can represent source-side and target-side sentences better than the word level even when the vocabulary is small. For the subword models, translation performance remains almost the same as with the 30k vocabulary, which is surprising. As shown in Table TABREF32 , the WPM method (Row 6) outperforms the other models, and, to our surprise, the translation results of both the WPM method and the WPM method on the raw corpus (Row 5) obtain higher BLEU scores than with the 30k vocabulary. We attribute this to the fact that the subword models are not constrained by the vocabulary size. Since the WPM method, which achieves the best results for the 15k vocabulary, is itself a subword-level translation granularity, we can conclude that subword translation granularities are well suited to the Chinese-to-English translation task.
In order to compare these translation granularities with a larger vocabulary, we also perform the Chinese-to-English translation experiment with a 60k vocabulary.
As shown in Table TABREF34 , the word and character levels (Row 1 and Row 2) with the 60k vocabulary improve by 1.15 and 1.11 BLEU points respectively compared to the 30k vocabulary. However, to our surprise, all the translation results of the subword-level granularities with the 60k vocabulary fall below those with the 30k vocabulary. As the vocabulary size increases, more fine-grained subword segmentation units are added to the vocabulary, and we infer that a large number of such subword units does not benefit the translation results. As for the hybrid word-character level, this method achieves 43.97 BLEU points, the highest among all the translation granularities with the 60k vocabulary. Compared with Table TABREF27 , the hybrid word-character level outperforms the best translation result with the 30k vocabulary (the BPE method) by 0.22 BLEU points.
We also conduct experiments using different translation granularities on the source and target sides. To keep the experiments manageable, we only compare several granularity pairs.
In Table TABREF36 , we find that when the source translation granularity is the word level (Row 2 and Row 3), the translation performance is relatively poor, even worse than using the word level on both sides in Table TABREF27 . With the BPE method on the source side, the hybrid word-character granularity on the target side obtains 43.73 BLEU points (Row 6), which is close to the best translation result in Table TABREF27 . The Hybrid_BPE method achieves up to 44.26 BLEU points (Row 4), which is higher than the BPE method by up to 0.51 BLEU points. This method achieves the best translation result for the Chinese-to-English translation task.
Results on English-to-Chinese Translation
We evaluate different translation granularities on the English-to-Chinese translation tasks, whose results are presented in Table TABREF39 .
We find that the hybrid word-character granularity (Row 3) obtains significant accuracy improvements over the word level and is also superior to the other granularities on large-scale English-to-Chinese translation. The BPE method (Row 4) does not perform as well on this task as on the Chinese-to-English task: its translation quality is lower than that of the hybrid word-character model by up to 0.97 BLEU points. However, the other subword-level translation granularity, the WPM method (Row 6), achieves 22.14 BLEU points, which is close to the hybrid word-character level. Although the character-level vocabulary on the Chinese side is only 7.2k, it still obtains 19.64 BLEU points (Row 2), which is on par with the translation performance of the word level.
As with the Chinese-to-English translation task, we carry out experiments on English-to-Chinese translation with different granularity pairs. According to Table TABREF36 , the Hybrid_BPE and BPE_Hybrid methods achieve relatively higher translation quality than the other combinations. Therefore, in this section we only use these two methods to test which is most suitable for the English-to-Chinese translation task.
Table TABREF41 shows that the translation performance of both methods falls below that of the hybrid word-character granularity in Table TABREF39 . The BPE_Hybrid method (Row 2) achieves 22.12 BLEU points, which is higher than the Hybrid_BPE method by 0.39 BLEU points and is close to the translation quality of the WPM method in Table TABREF39 .
Related Work
The recently proposed neural machine translation has drawn more and more attention. Most existing work in neural machine translation focuses on handling rare words BIBREF16 , BIBREF10 , BIBREF17 , integrating SMT strategies BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , designing better frameworks BIBREF22 , BIBREF11 , BIBREF23 and addressing low-resource scenarios BIBREF24 , BIBREF25 , BIBREF26 .
As for strategies for dealing with rare and unknown words, a number of authors have endeavored to explore methods for addressing them. Luong et al. BIBREF11 and Li et al. BIBREF16 propose simple alignment-based technique that can replace out-of-vocabulary words with similar words. Jean et al. BIBREF27 use a large vocabulary with a method based on importance sampling.
In addition, another direction for addressing the rare word problem in NMT is to change the granularity of segmentation. Chung et al. BIBREF8 focus on handling translation at the level of characters, without any word segmentation, on the target side only. Luong et al. BIBREF9 propose a novel hybrid architecture that combines the strengths of both word- and character-based models. Sennrich et al. BIBREF10 use the BPE method to encode rare and unknown words as sequences of subword units. Wu et al. BIBREF5 use both the WPM method and the hybrid word-character model in their online translation system. However, there is no study that shows which translation granularity is suitable for translation tasks involving the Chinese language. Our goal in this work is to make an empirical comparison of different translation granularities for bidirectional Chinese-English translation tasks.
Conclusion
In this work, we provide an extensive comparison of translation granularities in Chinese-English NMT, namely word, character, subword and hybrid word-character. We have also discussed the advantages and disadvantages of the various translation granularities in detail. When the same granularity is used on both sides, the experiments demonstrate that the subword model best fits Chinese-to-English translation when the vocabulary is not too large, while the hybrid word-character approach obtains the highest performance on English-to-Chinese translation. In addition, experiments with mixed granularities show that the Hybrid_BPE method achieves the best result for the Chinese-to-English translation task.
Acknowledgments
The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and No. 61402478, and it is also supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007. | LDC corpus |
e44a5514d7464993997212341606c2c0f3a72eb4 | e44a5514d7464993997212341606c2c0f3a72eb4_0 | Q: What is the worst performing translation granularity?
Text: Introduction
Neural machine translation (NMT), proposed by Kalchbrenner and Blunsom BIBREF0 and Sutskever et al. BIBREF1 , has achieved significant progress in recent years. Unlike traditional statistical machine translation (SMT) BIBREF2 , BIBREF3 , BIBREF4 , which contains multiple separately tuned components, NMT builds an end-to-end framework to model the entire translation process. For several language pairs, NMT has already achieved better translation performance than SMT BIBREF5 , BIBREF6 .
Conventional NMT systems limit the vocabulary to a modest size on both sides, and out-of-vocabulary words are replaced by a special UNK symbol. However, training and decoding are often conducted over an open vocabulary, and an obvious problem is that the NMT model is incapable of translating rare words. In particular, if a source word is outside the source vocabulary or its translation is outside the target vocabulary, the model is unable to generate a proper translation for this word during decoding. Both Sutskever et al. BIBREF1 and Bahdanau et al. BIBREF7 have observed that sentences with many out-of-vocabulary words tend to be translated much more poorly than sentences mainly containing frequent words.
To address this problem, many researchers propose a broad category of approaches by employing different translation granularities. Most of these are below the word level, e.g. characters BIBREF8 , hybrid word-characters BIBREF9 , BIBREF5 , and more intelligent subwords BIBREF10 , BIBREF5 . Besides, pioneering studies BIBREF5 , BIBREF6 demonstrate that translation tasks involving Chinese are some of the most difficult problems in NMT systems. However, there is no study that shows which translation granularity is suitable for Chinese-to-English and English-to-Chinese translation tasks.
In this work, we make an empirical comparison of different translation granularities for bidirectional English-Chinese translation tasks. In addition, we analyze the impact of these strategies on the translation results in detail. We demonstrate that for Chinese-to-English NMT, the subword model achieves the best results with 15k and 30k vocabularies, while the hybrid word-character model obtains the highest performance with a 60k vocabulary; the hybrid word-character model is also most suitable for English-to-Chinese translation. Our experiments show that none of the subword methods are bounded by the vocabulary size. Furthermore, we carry out experiments that employ different translation granularities on the source and target sides. The results show that using hybrid word-character granularity on the source side and BPE subword units on the target side achieves the best translation performance for the Chinese-to-English translation task, whereas the hybrid word-character model is most suitable for the English-to-Chinese translation task. To the best of our knowledge, this is the first work on an empirical comparison of various translation granularities for bidirectional Chinese-English translation.
Neural Machine Translation
Our models are based on the encoder-decoder architecture with attention mechanism proposed by Luong et al. BIBREF11 , which utilizes stacked LSTM layers for both the encoder and the decoder, as illustrated in Figure FIGREF1 . In this section, we briefly review the NMT framework.
First, the NMT encodes the source sentence INLINEFORM0 into a sequence of context vector representations INLINEFORM1 . Then, the NMT decodes from the context vector representation INLINEFORM2 and generates the target translation INLINEFORM3 one word at a time by maximizing the probability of INLINEFORM4 . Next, we review the encoder and decoder frameworks briefly.
Encoder: The context vector representation INLINEFORM0 is generated by the encoder using INLINEFORM1 stacked LSTM layers. Bi-directional connections are used for the bottom encoder layer, and INLINEFORM2 is a concatenation vector as shown in Eq. (1): DISPLAYFORM0
All other encoder layers are unidirectional, and INLINEFORM0 is calculated as follows: DISPLAYFORM0
Decoder: The conditional probability INLINEFORM0 is formulated as DISPLAYFORM0
Specifically, we employ a simple concatenation layer to produce an attentional hidden state INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 denotes the target hidden state at the top layer of a stacking LSTM. The attention model calculates INLINEFORM1 as the weighted sum of the source-side context vector representation, just as illustrated in the upper left corner of Figure FIGREF1 . DISPLAYFORM0
where INLINEFORM0 is a normalized item calculated as follows: DISPLAYFORM0
INLINEFORM0 is computed by using the following formula: DISPLAYFORM0
If INLINEFORM0 , INLINEFORM1 will be calculated by combining INLINEFORM2 as feed input BIBREF11 : DISPLAYFORM0
Given the bilingual training data INLINEFORM0 , all parameters of the attention-based NMT are optimized to maximize the following conditional log-likelihood: DISPLAYFORM0
Description of Different Translation Granularities
We revisit how the source and target sentences ( INLINEFORM0 and INLINEFORM1 ) are represented in NMT. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary INLINEFORM2 of unique tokens. A source sentence INLINEFORM3 is then built as a sequence of the integer indices. The target sentence is similarly transformed into a target sequence of integer indices.
The property of NMT allows us great freedom in the choice of token units, and we can segment sentences in different ways. In this section, we will elaborate on four proposed approaches about the choice of translation granularities.
Character Level
This translation granularity is easy to implement: all we have to do is split each sentence into a sequence of characters. However, character-level modeling on the English side is more challenging, as the network has to deal with long and coherent sequences of characters. In this case, sentences are often 300 INLINEFORM0 1000 symbols long, and the size of the state space grows exponentially, which makes such sequences a great challenge to handle.
Besides, the English alphabet consists of only 26 letters, so a character vocabulary on the English side would be very small. Considering these facts, we only separate the Chinese-side sentences into characters rather than both sides. Figure FIGREF11 shows an example of this translation granularity at the character level.
Hybrid Word-Characters Level
In regular word-based NMT, for all words outside the source vocabulary, one feeds the universal embedding representing UNK as input to the encoder. This is problematic because it discards valuable information about the source word. To address that, hybrid word-character approach will be adopted. In this part, we will introduce this granularity in detail.
Unlike in the conventional word model where out-of-vocabulary words are collapsed into a single UNK symbol, we convert these words into the sequence of constituent characters. Special prefixes are prepended to the characters. The purpose of the prefixes is to show the location of the characters in a word, and to distinguish them from normal in-vocabulary characters. There are three prefixes: INLINEFORM0 B INLINEFORM1 , INLINEFORM2 M INLINEFORM3 , and INLINEFORM4 E INLINEFORM5 , indicating beginning of the word, middle of the word and end of the word, respectively. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. Using this approach, in Figure FIGREF11 , we can see the word “龙年” is segmented into “ INLINEFORM6 B INLINEFORM7 龙 INLINEFORM8 E INLINEFORM9 年”, and the word “繁花似锦” is segmented into “ INLINEFORM10 B INLINEFORM11 繁 INLINEFORM12 M INLINEFORM13 花 INLINEFORM14 M INLINEFORM15 似 INLINEFORM16 E INLINEFORM17 锦”.
Subword Level
Considering languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that segment the sentence below the word level (In this paper, we call this level of symbols as subword units). In this part, we will introduce the two different methods of translation granularity on subword level.
Byte pair encoding (BPE) BIBREF12 is a compression algorithm. This simple data compression technique iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. It was first introduced as a translation granularity by Sennrich et al. BIBREF10 . In this approach, instead of merging frequent pairs of bytes, characters or character sequences are merged.
A detailed description of the algorithm for learning BPE operations is given in Sennrich et al. BIBREF10 . At decoding time, each word is first split into a sequence of characters, and the learned operations are then applied to merge the characters into larger, known symbols. For the BPE method, a special symbol is also needed to indicate the merging position. In Figure FIGREF11 , the word “繁花似锦” is segmented into subword units, with the special suffix “@@” appended to the first three units. In the decoding step, the translation results contain these special tokens as well. With these suffixes, we can recover the output easily.
The wordpiece model (WPM) implementation is initially developed to solve a Japanese/Korean segmentation problem for the speech recognition system BIBREF13 . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters, which is similar to the above method.
The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. The training method of WPM is described in more detail in Schuster and Nakajima BIBREF13 . As shown in Figure FIGREF11 , a special symbol is prepended only at the beginning of words. In this case, the words “龙年”, “繁花似锦”, “洋溢” and “祥和” are split into subwords, and the remaining words stay the same except for a special prefix “_”.
Dataset
We perform all these translation granularities on the NIST bidirectional Chinese-English translation tasks. The evaluation metric is BLEU BIBREF14 as calculated by the multi-bleu.perl script.
Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set.
Training Details
We build the described models modified from the Zoph_RNN toolkit which is written in C++/CUDA and provides efficient training across multiple GPUs. Our training procedure and hyper parameter choices are similar to those used by Luong et al. BIBREF11 . In the NMT architecture as illustrated in Figure FIGREF1 , the encoder has three stacked LSTM layers including a bidirectional layer, followed by a global attention layer, and the decoder contains two stacked LSTM layers followed by the softmax layer.
The word embedding dimension and the size of the hidden layers are all set to 1000. We limit the maximum sentence length in the training corpus to 120. Parameter optimization is performed using both the stochastic gradient descent (SGD) method and the Adam method BIBREF15 . For the first three epochs, we train using the Adam optimizer and a fixed learning rate of 0.001 without decay. For the remaining six epochs, we train using SGD; we set the learning rate to 0.1 at the beginning and halve it whenever the perplexity goes up on the development set. We set the minibatch size to 128. Dropout is also applied on each layer to avoid over-fitting, with the dropout rate set to 0.2. At test time, we employ beam search with beam size b = 12.
Data Segmentation
For Chinese word segmentation, we use our in-house segmentation tools. For the English corpus, the training data is tokenized with the Moses tokenizer. We carry out Chinese-to-English translation experiments with 30k and 15k vocabularies on both sides, and we also conduct an English-to-Chinese translation experiment with a 30k vocabulary. The word-level translation granularity serves as our baseline.
For the character level, we only segment the Chinese sentences into characters; the English sentences remain the same. For the hybrid word-character level, we segment the training corpus on both sides. We rank words by frequency from greatest to least in the training corpus, and in order to prevent pollution from very rare words, we set the segmentation threshold relatively high. For the 30k vocabulary, words with frequency below 64 are segmented into characters on the Chinese side, and the threshold is set to 22 on the English side. For the 15k vocabulary, we set the threshold to 350 and 96 on the Chinese and English sides respectively. For the 60k vocabulary, Chinese words with frequency below 14 and English words with frequency below 6 are split into characters.
For subword level, two different approaches are used. In BPE method, the number of merge operations is set to 30000 on 30k vocabulary size, 15000 on 15k vocabulary size and 60000 on 60k vocabulary size. For Chinese sentences, we segment the training corpus using our in-house segmentation tools first, and then we can apply the BPE method same as English sentences. Considering the essence of WPM method, we do not have to segment words for Chinese and tokenize sentences for English. That is to say, we can train the WPM without pre-processing step. Hence, for WPM method, we conduct our experiments both on the sentences trained on the raw corpus and the sentences trained on the segmented corpus.
Results on Chinese-to-English Translation
We list the BLEU scores of different translation granularities on 30k vocabulary in Table TABREF27 .
Row 1 is the translation result of the state-of-the-art NMT system at the word level. For the character-level granularity (Row 2), the translation quality is higher than the word level by only 0.38 BLEU points. The last three lines in Table TABREF27 are subword-level translation granularities, covering the BPE method and the WPM method. The BPE method (Row 4) achieves the best translation performance, with an improvement of 1.64 BLEU points over the word level. As for the WPM method (Row 6), the gap between this method and the BPE method is narrow. Moreover, the hybrid word-character level model (Row 3) outperforms the word level by 1.46 BLEU points, and its translation quality is very close to that of the BPE method. These experiments show that the hybrid word-character granularity and the BPE subword granularity are our choices of translation granularity for the Chinese-to-English translation task.
We apply the different translation granularities to the training corpus. To make a comparison, we randomly choose 10000 sentences. Table TABREF29 shows the average sentence length for each granularity.
A well-known flaw of NMT models is the inability to properly translate long sentences. However, most of the translation granularities go below the word level, so, as shown in Table TABREF29 , the resulting sequences are longer than at the word level. We therefore examine how translation performance varies with sentence length for all translation granularities. We follow Bahdanau et al. BIBREF7 and group sentences of similar lengths together, computing a BLEU score per group, as demonstrated in Figure FIGREF30 .
In order to make the comparison fair, length refers to the number of word-level tokens. As mentioned above, the hybrid word-character model is one of the suitable granularity choices for Chinese-to-English translation. We find that when the sentence length is below 20, this model outperforms the other models by a large margin, but as the length goes up, its advantage over the other models diminishes. The character-level granularity performs poorly on sentences shorter than 20 words. We think the reason may be that when sentences are short, the character-level representation cannot express the sentence meaning well. As for the BPE method, we find a strange phenomenon: when the source sentence contains 60 to 80 words, the translation performance of the BPE method is not very good, yet this method achieves almost 3 BLEU points more than the next-best approach when the source sentence is longer than 80 words. As shown in Figure FIGREF30 , the WPM method does not perform well on source sentences shorter than 60 words, but when the sentence length is between 60 and 80 words, it even outperforms the BPE method by up to 5.51 BLEU points. From this experiment, we conclude that the subword models are more effective than the other models in handling long sentences.
We are also interested in the translation results of the different translation granularities with a smaller vocabulary size. We therefore carry out the experiment on the Chinese-to-English task with a 15k vocabulary.
Compared to the 30k vocabulary, the translation performance of the word level (Row 1) with a 15k vocabulary drops by 2.14 BLEU points. However, the character level (Row 2) and the hybrid word-character level (Row 3) achieve 42.09 and 43.12 BLEU points respectively, which is on par with the translation quality at the 30k vocabulary. Both models exceed the word level by a large margin. We infer that both the character level and the hybrid word-character level can represent source-side and target-side sentences better than the word level even when the vocabulary is small. For the subword models, translation performance remains almost the same as with the 30k vocabulary, which is surprising. As shown in Table TABREF32 , the WPM method (Row 6) outperforms the other models, and, to our surprise, the translation results of both the WPM method and the WPM method on the raw corpus (Row 5) obtain higher BLEU scores than with the 30k vocabulary. We attribute this to the fact that the subword models are not constrained by the vocabulary size. Since the WPM method, which achieves the best results for the 15k vocabulary, is itself a subword-level translation granularity, we can conclude that subword translation granularities are well suited to the Chinese-to-English translation task.
In order to compare these translation granularities with a larger vocabulary, we also perform the Chinese-to-English translation experiment with a 60k vocabulary.
As shown in Table TABREF34 , the word and character levels (Row 1 and Row 2) with the 60k vocabulary improve by 1.15 and 1.11 BLEU points respectively compared to the 30k vocabulary. However, to our surprise, all the translation results of the subword-level granularities with the 60k vocabulary fall below those with the 30k vocabulary. As the vocabulary size increases, more fine-grained subword segmentation units are added to the vocabulary, and we infer that a large number of such subword units does not benefit the translation results. As for the hybrid word-character level, this method achieves 43.97 BLEU points, the highest among all the translation granularities with the 60k vocabulary. Compared with Table TABREF27 , the hybrid word-character level outperforms the best translation result with the 30k vocabulary (the BPE method) by 0.22 BLEU points.
We also conduct experiments using different translation granularities on the source and target sides. To keep the experiments manageable, we only compare several granularity pairs.
In Table TABREF36 , we find that when the source translation granularity is the word level (Row 2 and Row 3), the translation performance is relatively poor, even worse than using the word level on both sides in Table TABREF27 . With the BPE method on the source side, the hybrid word-character granularity on the target side obtains 43.73 BLEU points (Row 6), which is close to the best translation result in Table TABREF27 . The Hybrid_BPE method achieves up to 44.26 BLEU points (Row 4), which is higher than the BPE method by up to 0.51 BLEU points. This method achieves the best translation result for the Chinese-to-English translation task.
Results on English-to-Chinese Translation
We evaluate different translation granularities on the English-to-Chinese translation tasks, whose results are presented in Table TABREF39 .
We find that the hybrid word-character granularity (Row 3) obtains significant accuracy improvements over the word level and is also superior to the other granularities on large-scale English-to-Chinese translation. The BPE method (Row 4) does not perform as well on this task as on the Chinese-to-English task: its translation quality is lower than that of the hybrid word-character model by up to 0.97 BLEU points. However, the other subword-level translation granularity, the WPM method (Row 6), achieves 22.14 BLEU points, which is close to the hybrid word-character level. Although the character-level vocabulary on the Chinese side is only 7.2k, it still obtains 19.64 BLEU points (Row 2), which is on par with the translation performance of the word level.
As with the Chinese-to-English translation task, we carry out experiments on English-to-Chinese translation with different granularity pairs. According to Table TABREF36 , the Hybrid_BPE and BPE_Hybrid methods achieve relatively higher translation quality than the other combinations. Therefore, in this section we only use these two methods to test which is most suitable for the English-to-Chinese translation task.
Table TABREF41 shows that the translation performance of both methods falls below that of the hybrid word-character granularity in Table TABREF39 . The BPE_Hybrid method (Row 2) achieves 22.12 BLEU points, which is higher than the Hybrid_BPE method by 0.39 BLEU points and is close to the translation quality of the WPM method in Table TABREF39 .
Related Work
The recently proposed neural machine translation has drawn more and more attention. Most existing work in neural machine translation focuses on handling rare words BIBREF16 , BIBREF10 , BIBREF17 , integrating SMT strategies BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , designing better frameworks BIBREF22 , BIBREF11 , BIBREF23 and addressing low-resource scenarios BIBREF24 , BIBREF25 , BIBREF26 .
As for strategies for dealing with rare and unknown words, a number of authors have endeavored to explore methods for addressing them. Luong et al. BIBREF11 and Li et al. BIBREF16 propose simple alignment-based technique that can replace out-of-vocabulary words with similar words. Jean et al. BIBREF27 use a large vocabulary with a method based on importance sampling.
In addition, another direction for addressing the rare word problem in NMT is to change the granularity of segmentation. Chung et al. BIBREF8 focus on handling translation at the level of characters, without any word segmentation, on the target side only. Luong et al. BIBREF9 propose a novel hybrid architecture that combines the strengths of both word- and character-based models. Sennrich et al. BIBREF10 use the BPE method to encode rare and unknown words as sequences of subword units. Wu et al. BIBREF5 use both the WPM method and the hybrid word-character model in their online translation system. However, there is no study that shows which translation granularity is suitable for translation tasks involving the Chinese language. Our goal in this work is to make an empirical comparison of different translation granularities for bidirectional Chinese-English translation tasks.
Conclusion
In this work, we provide an extensive comparison of translation granularities in Chinese-English NMT, namely word, character, subword and hybrid word-character. We have also discussed the advantages and disadvantages of the various translation granularities in detail. When the same granularity is used on both sides, the experiments demonstrate that the subword model best fits Chinese-to-English translation when the vocabulary is not too large, while the hybrid word-character approach obtains the highest performance on English-to-Chinese translation. In addition, experiments with mixed granularities show that the Hybrid_BPE method achieves the best result for the Chinese-to-English translation task.
Acknowledgments
The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and No. 61402478, and it is also supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007. | Unanswerable |
310e61b9dd4d75bc1bebbcb1dae578f55807cd04 | 310e61b9dd4d75bc1bebbcb1dae578f55807cd04_0 | Q: What dataset did they use?
Text: Introduction
Neural machine translation (NMT), proposed by Kalchbrenner and Blunsom BIBREF0 and Sutskever et al. BIBREF1 , has achieved significant progress in recent years. Unlike traditional statistical machine translation (SMT) BIBREF2 , BIBREF3 , BIBREF4 , which contains multiple separately tuned components, NMT builds an end-to-end framework to model the entire translation process. For several language pairs, NMT has already achieved better translation performance than SMT BIBREF5 , BIBREF6 .
Conventional NMT systems limit the vocabulary to a modest size on both sides, and out-of-vocabulary words are replaced by a special UNK symbol. However, training and decoding are often conducted over an open vocabulary, and an obvious problem is that the NMT model is incapable of translating rare words. In particular, if a source word is outside the source vocabulary or its translation is outside the target vocabulary, the model is unable to generate a proper translation for this word during decoding. Both Sutskever et al. BIBREF1 and Bahdanau et al. BIBREF7 have observed that sentences with many out-of-vocabulary words tend to be translated much more poorly than sentences mainly containing frequent words.
To address this problem, many researchers propose a broad category of approaches by employing different translation granularities. Most of these are below the word level, e.g. characters BIBREF8 , hybrid word-characters BIBREF9 , BIBREF5 , and more intelligent subwords BIBREF10 , BIBREF5 . Besides, pioneering studies BIBREF5 , BIBREF6 demonstrate that translation tasks involving Chinese are some of the most difficult problems in NMT systems. However, there is no study that shows which translation granularity is suitable for Chinese-to-English and English-to-Chinese translation tasks.
In this work, we make an empirical comparison of different translation granularities for bidirectional English-Chinese translation tasks. In addition, we analyze the impact of these strategies on the translation results in detail. We demonstrate that for Chinese-to-English NMT, the subword model achieves the best results with 15k and 30k vocabularies, while the hybrid word-character model obtains the highest performance with a 60k vocabulary; the hybrid word-character model is also most suitable for English-to-Chinese translation. Our experiments show that none of the subword methods are bounded by the vocabulary size. Furthermore, we carry out experiments that employ different translation granularities on the source and target sides. The results show that using hybrid word-character granularity on the source side and BPE subword units on the target side achieves the best translation performance for the Chinese-to-English translation task, whereas the hybrid word-character model is most suitable for the English-to-Chinese translation task. To the best of our knowledge, this is the first work on an empirical comparison of various translation granularities for bidirectional Chinese-English translation.
Neural Machine Translation
Our models are based on the encoder-decoder architecture with attention mechanism proposed by Luong et al. BIBREF11 , which utilizes stacked LSTM layers for both the encoder and the decoder, as illustrated in Figure FIGREF1 . In this section, we briefly review the NMT framework.
First, the NMT encodes the source sentence INLINEFORM0 into a sequence of context vector representations INLINEFORM1 . Then, the NMT decodes from the context vector representation INLINEFORM2 and generates the target translation INLINEFORM3 one word at a time by maximizing the probability of INLINEFORM4 . Next, we review the encoder and decoder frameworks briefly.
Encoder: The context vector representation INLINEFORM0 is generated by the encoder using INLINEFORM1 stacked LSTM layers. Bi-directional connections are used for the bottom encoder layer, and INLINEFORM2 is a concatenation vector as shown in Eq. (1): DISPLAYFORM0
All other encoder layers are unidirectional, and INLINEFORM0 is calculated as follows: DISPLAYFORM0
Decoder: The conditional probability INLINEFORM0 is formulated as DISPLAYFORM0
Specifically, we employ a simple concatenation layer to produce an attentional hidden state INLINEFORM0 : DISPLAYFORM0
where INLINEFORM0 denotes the target hidden state at the top layer of a stacking LSTM. The attention model calculates INLINEFORM1 as the weighted sum of the source-side context vector representation, just as illustrated in the upper left corner of Figure FIGREF1 . DISPLAYFORM0
where INLINEFORM0 is a normalized item calculated as follows: DISPLAYFORM0
INLINEFORM0 is computed by using the following formula: DISPLAYFORM0
If INLINEFORM0 , INLINEFORM1 will be calculated by combining INLINEFORM2 as feed input BIBREF11 : DISPLAYFORM0
Given the bilingual training data INLINEFORM0 , all parameters of the attention-based NMT are optimized to maximize the following conditional log-likelihood: DISPLAYFORM0
Description of Different Translation Granularities
We revisit how the source and target sentences ( INLINEFORM0 and INLINEFORM1 ) are represented in NMT. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary INLINEFORM2 of unique tokens. A source sentence INLINEFORM3 is then built as a sequence of the integer indices. The target sentence is similarly transformed into a target sequence of integer indices.
The property of NMT allows us great freedom in the choice of token units, and we can segment sentences in different ways. In this section, we will elaborate on four proposed approaches about the choice of translation granularities.
Character Level
This translation granularity is easy to implement: all we have to do is split the sentence into a sequence of characters. However, character-level modeling on the English side is more challenging, as the network has to deal with long and coherent sequences of characters. In this case, a sequence is often 300 INLINEFORM0 1000 symbols long, and the size of the state space grows exponentially, which makes it a great challenge to handle.
Besides, the English alphabet consists of only 26 letters, so the vocabulary on the English side would be very small. Considering these facts, we only split the Chinese-side sentences into characters rather than both sides. Figure FIGREF11 shows an example of the character-level translation granularity.
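The character-level preprocessing of the Chinese side amounts to a one-line transformation; the snippet below is a minimal sketch of that step (the function name is ours).

```python
def to_char_level(zh_sentence):
    """Split a Chinese sentence into single characters (whitespace dropped).
    Only the Chinese side is treated this way; the English side stays word-level."""
    return [ch for ch in zh_sentence if not ch.isspace()]

# e.g. "龙年 繁花似锦" -> ['龙', '年', '繁', '花', '似', '锦']
```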
Hybrid Word-Character Level
In regular word-based NMT, for all words outside the source vocabulary, one feeds the universal embedding representing UNK as input to the encoder. This is problematic because it discards valuable information about the source word. To address this, we adopt the hybrid word-character approach, which we introduce in detail in this part.
Unlike in the conventional word model where out-of-vocabulary words are collapsed into a single UNK symbol, we convert these words into the sequence of constituent characters. Special prefixes are prepended to the characters. The purpose of the prefixes is to show the location of the characters in a word, and to distinguish them from normal in-vocabulary characters. There are three prefixes: INLINEFORM0 B INLINEFORM1 , INLINEFORM2 M INLINEFORM3 , and INLINEFORM4 E INLINEFORM5 , indicating beginning of the word, middle of the word and end of the word, respectively. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. Using this approach, in Figure FIGREF11 , we can see the word “龙年” is segmented into “ INLINEFORM6 B INLINEFORM7 龙 INLINEFORM8 E INLINEFORM9 年”, and the word “繁花似锦” is segmented into “ INLINEFORM10 B INLINEFORM11 繁 INLINEFORM12 M INLINEFORM13 花 INLINEFORM14 M INLINEFORM15 似 INLINEFORM16 E INLINEFORM17 锦”.
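A minimal sketch of this hybrid segmentation and its inverse is shown below. The literal marker strings <B>, <M>, <E> stand in for the prefixes described above, the in-vocabulary word set is passed in explicitly, and the treatment of single-character out-of-vocabulary words is our own assumption.

```python
def hybrid_segment(tokens, vocab):
    """Keep in-vocabulary words; split OOV words into prefixed characters."""
    out = []
    for tok in tokens:
        if tok in vocab or len(tok) == 1:
            out.append(tok)                     # assumption: single chars kept as-is
        else:
            chars = list(tok)
            out.append("<B>" + chars[0])
            out.extend("<M>" + c for c in chars[1:-1])
            out.append("<E>" + chars[-1])
    return out

def hybrid_restore(tokens):
    """Reverse the tokenization as a post-processing step."""
    words, buf = [], ""
    for tok in tokens:
        if tok.startswith("<B>") or tok.startswith("<M>"):
            buf += tok[3:]
        elif tok.startswith("<E>"):
            words.append(buf + tok[3:])
            buf = ""
        else:
            words.append(tok)
    return " ".join(words)
```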
Subword Level
For languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that segment the sentence below the word level (in this paper, we refer to these symbols as subword units). In this part, we introduce two different methods of translation granularity at the subword level.
Byte pair encoding (BPE) BIBREF12 is a compression algorithm. This simple data compression technique iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. The method was first introduced as a translation granularity by Sennrich et al. BIBREF10 . In this approach, instead of merging frequent pairs of bytes, characters or character sequences are merged.
A detailed description of the algorithm for learning BPE operations is given in Sennrich et al. BIBREF10 . To apply the model, each word is first split into a sequence of characters, and the learned operations are then applied to merge the characters into larger, known symbols. The BPE method also needs a special symbol to indicate the merging position. In Figure FIGREF11 , the word “繁花似锦” is segmented into subword units, and every unit except the last carries the special suffix “@@”. In the decoding step, the translation results contain these special tokens as well, and with the suffixes we can recover the original output easily.
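The learning loop for BPE merges is short; the sketch below closely follows the reference pseudo-code published by Sennrich et al., with the vocabulary represented as space-separated symbol sequences mapped to frequencies. The wrapper function and the example merge count are ours.

```python
import re, collections

def get_stats(vocab):
    """Count frequencies of adjacent symbol pairs."""
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Merge the chosen pair everywhere it occurs as a standalone bigram."""
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges):
    """vocab maps e.g. 'l o w </w>' -> 5; run e.g. 30000 merge operations."""
    merges = []
    for _ in range(num_merges):
        pairs = get_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_vocab(best, vocab)
        merges.append(best)
    return merges
```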
The wordpiece model (WPM) implementation was initially developed to solve a Japanese/Korean segmentation problem for a speech recognition system BIBREF13 . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters, similar to the above method.
The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. The training method of WPM is described in more detail in Schuster and Nakajima BIBREF13 . As shown in Figure FIGREF11 , a special symbol is prepended only at the beginning of each word. In this case, the words “龙年”, “繁花似锦”, “洋溢” and “祥和” are split into subwords, while the remaining words stay unchanged except for the special prefix “_”.
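The original WPM implementation is not publicly available, but the open-source sentencepiece library offers a comparable data-driven segmenter that also marks word beginnings with a special prefix. The sketch below shows one possible way to reproduce this kind of granularity; the file names, vocabulary size and model type are placeholders, not the authors' settings.

```python
import sentencepiece as spm

# Train directly on raw (untokenized) text, as discussed above.
spm.SentencePieceTrainer.train(
    input="train.raw.txt",      # placeholder path
    model_prefix="wpm",
    vocab_size=30000,
    model_type="bpe",           # or "unigram"; both are data-driven
)

sp = spm.SentencePieceProcessor(model_file="wpm.model")
pieces = sp.encode("This is a test sentence.", out_type=str)
# pieces carry the word-boundary marker '▁', e.g. ['▁This', '▁is', ...]
```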
Dataset
We evaluate all these translation granularities on the NIST bidirectional Chinese-English translation tasks. The evaluation metric is BLEU BIBREF14 , as calculated by the multi-bleu.perl script.
Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set.
Training Details
We build the described models on a modified version of the Zoph_RNN toolkit, which is written in C++/CUDA and provides efficient training across multiple GPUs. Our training procedure and hyperparameter choices are similar to those used by Luong et al. BIBREF11 . In the NMT architecture illustrated in Figure FIGREF1 , the encoder has three stacked LSTM layers including a bidirectional layer, followed by a global attention layer, and the decoder contains two stacked LSTM layers followed by the softmax layer.
The word embedding dimension and the size of the hidden layers are all set to 1000. We limit the maximum sentence length in the training corpus to 120. Parameter optimization is performed using both the stochastic gradient descent (SGD) method and the Adam method BIBREF15 . For the first three epochs, we train with the Adam optimizer and a fixed learning rate of 0.001 without decay. For the remaining six epochs, we train with SGD, starting from a learning rate of 0.1 and halving it whenever the perplexity on the development set goes up. We set the minibatch size to 128. Dropout is applied to each layer to avoid over-fitting, with the dropout rate set to 0.2. At test time, we employ beam search with beam size b = 12.
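The optimizer schedule just described can be sketched as follows. The training-loop and perplexity functions are placeholders, and the reading that it is the learning rate that gets halved is our interpretation of the setup.

```python
import torch

def run_schedule(model, train_one_epoch, dev_perplexity):
    """Adam (lr=0.001) for three epochs, then SGD from lr=0.1,
    halving the rate whenever development perplexity increases."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(3):
        train_one_epoch(model, optimizer)

    lr, best_ppl = 0.1, float("inf")
    for _ in range(6):
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        train_one_epoch(model, optimizer)
        ppl = dev_perplexity(model)
        if ppl > best_ppl:
            lr /= 2.0          # halve the learning rate when dev perplexity rises
        best_ppl = min(best_ppl, ppl)
```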
Data Segmentation
For Chinese word segmentation, we use our in-house segmentation tools. The English training data is tokenized with the Moses tokenizer. We carry out Chinese-to-English translation experiments with 30k and 15k vocabularies on both sides, and we also conduct an English-to-Chinese translation experiment with a 30k vocabulary. The word-level translation granularity serves as our baseline.
For character level, we only segment the Chinese sentences into characters and leave the English sentences unchanged. For hybrid word-character level, we segment the training corpus on both sides. We rank the words by frequency in the training corpus and, to prevent very rare words from polluting the vocabulary, set the segmentation point relatively high. For the 30k vocabulary, Chinese words with frequency below 64 are segmented into characters, and the segmentation point is set to 22 on the English side. For the 15k vocabulary, we set the segmentation point to 350 and 96 on the Chinese and English sides respectively. For the 60k vocabulary, Chinese words with frequency below 14 and English words with frequency below 6 are split into characters.
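Choosing such a segmentation point boils down to finding the frequency of the rarest word that still fits into the target vocabulary size; a small sketch with illustrative names is given below (the concrete cutoffs in the text were tuned on the in-house corpus, so the numbers will differ on other data).

```python
from collections import Counter

def segmentation_point(corpus_tokens, vocab_size):
    """Return the frequency cutoff that keeps roughly `vocab_size` words;
    anything rarer is handed to the character (or hybrid) fallback."""
    counts = Counter(corpus_tokens)
    kept = counts.most_common(vocab_size)
    return kept[-1][1] if kept else 0   # frequency of the rarest kept word
```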
For subword level, two different approaches are used. In the BPE method, the number of merge operations is set to 30000 for the 30k vocabulary, 15000 for the 15k vocabulary and 60000 for the 60k vocabulary. For Chinese sentences, we first segment the training corpus using our in-house segmentation tools and then apply the BPE method in the same way as for English sentences. Given the nature of the WPM method, we do not need to segment Chinese words or tokenize English sentences; that is, WPM can be trained without a pre-processing step. Hence, for the WPM method, we conduct experiments both on the raw corpus and on the pre-segmented corpus.
Results on Chinese-to-English Translation
We list the BLEU scores of different translation granularities on 30k vocabulary in Table TABREF27 .
Row 1 is the translation result of the state-of-the-art NMT system at the word level. For the character-level granularity (Row 2), the translation quality is higher than the word level by only 0.38 BLEU points. The last three rows in Table TABREF27 report subword-level translation granularities, covering the BPE and WPM methods. The BPE method (Row 4) achieves the best translation performance, an improvement of 1.64 BLEU points over the word level. As for the WPM method (Row 6), the gap between it and the BPE method is narrow. Moreover, the hybrid word-character model (Row 3) outperforms the word level by 1.46 BLEU points, and its translation quality is very close to that of the BPE method. These experiments show that the hybrid word-character granularity and the BPE method at the subword level are the preferred choices of translation granularity for the Chinese-to-English translation task.
We apply the different translation granularities to the training corpus. For comparison, we randomly choose 10000 sentences; Table TABREF29 shows the average sentence length for each granularity.
A well-known flaw of NMT models is the inability to properly translate long sentences. Most of the translation granularities considered here operate below the word level, so, as shown in Table TABREF29 , the resulting sequences are longer than at the word level. We therefore examine how translation performance varies with sentence length for each granularity. Following Bahdanau et al. BIBREF7 , we group sentences of similar lengths together and compute a BLEU score per group, as demonstrated in Figure FIGREF30 .
To make the comparison fair, length refers to the number of tokens in the word-level segmentation. As mentioned above, the hybrid word-character model is one of the suitable granularity choices for Chinese-to-English translation. When sentences are shorter than 20 words, this model outperforms the other models by a large margin, but its advantage diminishes as sentence length increases. The character-level granularity performs poorly on sentences shorter than 20 words; we suspect that for short sentences the character-level representation cannot express the sentence meaning well. As for the BPE method, we observe an unexpected pattern: when the source sentence contains 60 to 80 words, its translation performance drops, yet it achieves almost 3 BLEU points more than the next-best approach when the source sentence is longer than 80 words. As shown in Figure FIGREF30 , the WPM method does not perform well on source sentences shorter than 60 words, but for sentences between 60 and 80 words it even outperforms the BPE method by up to 5.51 BLEU points. From this experiment we conclude that subword models are more effective than the other models at handling long sentences.
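A length-bucketed BLEU analysis of this kind can be reproduced with a few lines of code. The sketch below uses sacrebleu for convenience, whereas the paper relies on multi-bleu.perl, so absolute scores may differ slightly; the bucket width of 20 mirrors the grouping discussed above.

```python
import sacrebleu

def bleu_by_length(src_sents, hyp_sents, ref_sents, bucket=20):
    """Group test sentences by source length (in words) and report BLEU per bucket."""
    buckets = {}
    for src, hyp, ref in zip(src_sents, hyp_sents, ref_sents):
        key = (len(src.split()) // bucket) * bucket
        hyps, refs = buckets.setdefault(key, ([], []))
        hyps.append(hyp)
        refs.append(ref)
    return {k: sacrebleu.corpus_bleu(h, [r]).score
            for k, (h, r) in sorted(buckets.items())}
```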
We are also interested in the translation results of the different granularities with a smaller vocabulary, so we repeat the Chinese-to-English experiment with a 15k vocabulary.
Compared to the 30k vocabulary, the translation performance of the word level (Row 1) drops by 2.14 BLEU points at 15k. However, the character level (Row 2) and the hybrid word-character level (Row 3) achieve 42.09 and 43.12 BLEU points respectively, which is on par with the translation quality at 30k, and both exceed the word level by a large margin. We infer that both the character level and the hybrid word-character level can represent the source and target sentences better than the word level even when the vocabulary size is small. For the subword models, translation performance remains almost the same as with the 30k vocabulary, which exceeded our expectations. As shown in Table TABREF32 , the WPM method (Row 6) outperforms the other models and, to our surprise, both the WPM method and the WPM method on the raw corpus (Row 5) obtain higher BLEU scores than with the 30k vocabulary. We attribute this to the fact that subword models are not constrained by the vocabulary size. The WPM method, which achieves the best result at the 15k vocabulary size, is itself a subword-level granularity, so we conclude that subword translation granularities are the most suitable for the Chinese-to-English translation task.
To compare these translation granularities with a larger vocabulary, we also perform the Chinese-to-English experiments with a 60k vocabulary.
As shown in Table TABREF34 , the word and character levels (Row 1 and Row 2) improve by 1.15 and 1.11 BLEU points respectively over the 30k vocabulary. To our surprise, however, all subword-level results at 60k fall below those obtained with the 30k vocabulary. As the vocabulary size increases, more fine-grained subword segmentation units are added, and we infer that such a large number of subword units does not benefit the translation results. The hybrid word-character level achieves 43.97 BLEU points, the highest among all translation granularities at the 60k vocabulary size; compared with Table TABREF27 , it outperforms the best result at 30k (the BPE method) by 0.22 BLEU points.
We also conduct experiments that use different translation granularities on the source and target sides. To keep the experiments manageable, we only compare several granularity pairs.
In Table TABREF36 , we can see that when the source granularity is word level (Row 2 and Row 3), the translation performance is relatively poor, even worse than using the word level on both sides in Table TABREF27 . With the BPE method on the source side, the hybrid word-character granularity on the target side obtains 43.73 BLEU points (Row 6), which is close to the best translation result in Table TABREF27 . The Hybrid_BPE combination achieves up to 44.26 BLEU points (Row 4), which is even higher than the BPE method by up to 0.51 BLEU points. This combination yields the best translation result for the Chinese-to-English translation task.
Results on English-to-Chinese Translation
We evaluate different translation granularities on the English-to-Chinese translation tasks, whose results are presented in Table TABREF39 .
We find that the hybrid word-character granularity (Row 3) obtains significant accuracy improvements over the word level and is also superior to the other granularities on large-scale English-to-Chinese translation. The BPE method (Row 4) does not perform as well on this task as on the Chinese-to-English task; its translation quality is lower than the hybrid word-character model by up to 0.97 BLEU points. However, the other subword-level granularity, the WPM method (Row 6), achieves 22.14 BLEU points, close to the hybrid word-character level. Although the character-level vocabulary on the Chinese side contains only 7.2k symbols, it still obtains 19.64 BLEU points (Row 2), on par with the word-level translation performance.
As for the Chinese-to-English task, we carry out experiments with different source/target granularities for English-to-Chinese translation. According to Table TABREF36 , the Hybrid_BPE and BPE_Hybrid combinations achieve relatively higher translation quality than the other combinations. Therefore, in this section we only use these two methods to test which is most suitable for the English-to-Chinese translation task.
Table TABREF41 shows that the translation performance of both combinations falls below the hybrid word-character granularity in Table TABREF39 . The BPE_Hybrid combination (Row 2) achieves 22.12 BLEU points, which is higher than the Hybrid_BPE combination by 0.39 BLEU points and close to the translation quality of the WPM method in Table TABREF39 .
Related Work
The recently proposed neural machine translation has drawn more and more attention. Most existing work in neural machine translation focuses on handling rare words BIBREF16 , BIBREF10 , BIBREF17 , integrating SMT strategies BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , designing better frameworks BIBREF22 , BIBREF11 , BIBREF23 and addressing low-resource scenarios BIBREF24 , BIBREF25 , BIBREF26 .
As for strategies for dealing with rare and unknown words, a number of authors have explored methods for addressing them. Luong et al. BIBREF11 and Li et al. BIBREF16 propose simple alignment-based techniques that replace out-of-vocabulary words with similar words. Jean et al. BIBREF27 use a large vocabulary with a method based on importance sampling.
In addition, another direction for addressing the rare word problem in NMT is changing the granularity of segmentation. Chung et al. BIBREF8 focus on handling translation at the level of characters, without any word segmentation, on the target side only. Luong et al. BIBREF9 propose a novel hybrid architecture that combines the strengths of word- and character-based models. Sennrich et al. BIBREF10 use the BPE method to encode rare and unknown words as sequences of subword units. Wu et al. BIBREF5 use both the WPM method and the hybrid word-character model in their online translation system. However, no study has shown which translation granularity is most suitable for translation tasks involving Chinese. Our goal in this work is to make an empirical comparison of different translation granularities for bidirectional Chinese-English translation tasks.
Conclusion
In this work, we provide an extensive comparison of translation granularities in Chinese-English NMT, namely word, character, subword and hybrid word-character, and we discuss the advantages and disadvantages of each in detail. For the same granularity on both sides, the experiments demonstrate that the subword model best fits Chinese-to-English translation when the vocabulary is not very large, while the hybrid word-character approach obtains the highest performance on English-to-Chinese translation. In addition, experiments with different source/target granularities show that the Hybrid_BPE combination achieves the best result for the Chinese-to-English translation task.
Acknowledgments
The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and No. 61402478, and it is also supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007. | LDC corpus, NIST 2003(MT03), NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06), NIST 2008(MT08) |
bdc6664cec2b94b0b3769bc70a60914795f39574 | bdc6664cec2b94b0b3769bc70a60914795f39574_0 | Q: How do they measure performance?
Text: INTRODUCTION
The Semantic Web provides a large number of structured datasets in the form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach is to provide natural language interfaces that allow question answering over Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on visual means only and requires more user interaction than natural language QA, but it also provides additional benefits, such as intuitive insights into dataset characteristics, a broader understanding of the answer, the potential to further explore the answer context, and knowledge sharing by storing and sharing the resulting diagrams.
In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.
The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.
In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.
The results indicate that, aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system with DQA in case of doubt.
This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 .
RELATED WORK
As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.
Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.
Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.
BIBREF0 describe the challenges of QA over knowledge bases using natural languages, and elaborate the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are the following. gAnswer BIBREF6 is an approach for RDF QA that takes a “graph-driven” perspective. In contrast to traditional approaches, which first try to understand the question and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is a QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. Finally, the WDAqua BIBREF0 system also aims for language independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a process of query expansion, SPARQL construction, query ranking and then making an answer decision.
For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset.
SYSTEM DESCRIPTION
The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .
The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.
To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .
As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 .
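For comparison with both the diagrammatic and the natural-language route, the same one-hop question can be posed to Wikidata directly: the "head of government" edge in the diagram corresponds to property P6, and Paris to item Q90. The sketch below is our own illustration of that equivalent query against the public endpoint, not part of the Ontodia tool.

```python
import requests

# Paris (wd:Q90) --head of government (wdt:P6)--> ?mayor
QUERY = """
SELECT ?mayor ?mayorLabel WHERE {
  wd:Q90 wdt:P6 ?mayor .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "dqa-example/0.1"},   # the endpoint expects a user agent
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["mayorLabel"]["value"])            # e.g. "Anne Hidalgo"
```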
EVALUATION
Here we present the evaluation of DQA in comparison to four QALD systems.
Evaluation Setup
As the evaluation dataset, we reuse questions from QALD7 benchmark task 4, “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which helps a user find the answer. We follow the model of BIBREF14 , who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each, resulting in 36 questions from the QALD dataset. The questions were picked in equal number for the five basic question categories.
20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.
The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .
For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces and submitted them to the systems. Typically, QALD tools provide a distinct answer, which may be a simple literal or a set of entities representing the answer, and which can be compared to the gold standard result. However, the WDAqua system sometimes provides, in addition to the direct answer to the question, links to documents related to the question. We always chose the answer available as the direct answer.
To assess the correctness of the answers given both by participants in the DQA experiments and by the QALD systems, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .
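For concreteness, the set-based metrics can be computed as in the following sketch, which reproduces the Albert Einstein example from the text.

```python
def prf1(system_answers, gold_answers):
    """Set-based precision, recall and F1 over answer items."""
    system, gold = set(system_answers), set(gold_answers)
    if not system or not gold:
        return 0.0, 0.0, 0.0
    correct = len(system & gold)
    p = correct / len(system)
    r = correct / len(gold)
    f1 = 2 * p * r / (p + r) if correct else 0.0
    return p, r, f1

# Two answers given, one of them correct:
print(prf1(["Ulm", "Bern"], ["Ulm"]))   # -> (0.5, 1.0, 0.666...)
```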
For DQA, four participants answered each question; therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers given by the participants are available online.
Evaluation Results and Discussion
Table TABREF8 presents the overall evaluation metrics of DQA, and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).
In further evaluations, we compare DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as representative of QALD tools, as it provides state-of-the-art results, and is well grounded in the Semantic Web community.
Comparing DQA and WDAqua, the first interesting question is: To what extent is DQA helpful on questions that could not be answered by the QALD system? For WDAqua the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer were answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered by WDAqua, then overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as a complementary component in QALD systems.
As expected, questions that are difficult to answer with one approach are also harder for the other approach, since some questions in the dataset are simply more complex to process and understand than others. However, almost 70% of questions not answered by WDAqua could still be answered by DQA. As an example of a case which is easier to answer for one approach than the other, a question that DQA users could answer, but where WDAqua failed, is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard to interpret correctly for a machine. In contrast to DQA, QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 . Simpler questions, like “Who is the mayor of Paris?”, were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.
Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to the question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidate entities, i.e., all African birds and all extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge number of nodes on the exploration canvas.
Overall, the experiments indicate that, in addition to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing the user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, like understanding the dataset characteristics, sharing of resulting diagrams (if supported by the tool), and finding more information related to the original information need.
For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering; here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can, for example, have an “Explore visually with diagrams” button, which brings the user to a canvas on which the entities detected by the QALD system within the question and results (if any) are displayed as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).
The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer.
CONCLUSIONS
In this work, we compare two approaches to answering questions over Linked Data datasets: a visual diagrammatic approach (DQA), which involves iterative exploration of the graph, and a natural language-based one (QALD). The evaluations show that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1) and for dataset understanding and further exploration. The contributions include: i) a comparative evaluation of four QALD tools and DQA with a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.
In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the discussed potentials. We envision an integrated tool, that uses QALD as basic method to find an answer to a question quickly, but also allows to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits.
ACKNOWLEDGEMENTS
This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program. | average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values |
e40df8c685a28b98006c47808f506def68f30e26 | e40df8c685a28b98006c47808f506def68f30e26_0 | Q: Do they measure the performance of a combined approach?
Text: INTRODUCTION
The Semantic Web provides a large number of structured datasets in the form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach is to provide natural language interfaces that allow question answering over Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on visual means only and requires more user interaction than natural language QA, but it also provides additional benefits, such as intuitive insights into dataset characteristics, a broader understanding of the answer, the potential to further explore the answer context, and knowledge sharing by storing and sharing the resulting diagrams.
In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.
The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.
In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.
The results indicate that, aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system with DQA in case of doubt.
This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 .
RELATED WORK
As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.
Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.
Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.
BIBREF0 describe the challenges of QA over knowledge bases using natural languages, and elaborate the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are the following. gAnswer BIBREF6 is an approach for RDF QA that takes a “graph-driven” perspective. In contrast to traditional approaches, which first try to understand the question and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is a QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. Finally, the WDAqua BIBREF0 system also aims for language independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a process of query expansion, SPARQL construction, query ranking and then making an answer decision.
For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset.
SYSTEM DESCRIPTION
The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .
The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.
To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .
As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 .
EVALUATION
Here we present the evaluation of DQA in comparison to four QALD systems.
Evaluation Setup
As the evaluation dataset, we reuse questions from QALD7 benchmark task 4, “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which helps a user find the answer. We follow the model of BIBREF14 , who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each, resulting in 36 questions from the QALD dataset. The questions were picked in equal number for the five basic question categories.
20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.
The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .
For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces and submitted them to the systems. Typically, QALD tools provide a distinct answer, which may be a simple literal or a set of entities representing the answer, and which can be compared to the gold standard result. However, the WDAqua system sometimes provides, in addition to the direct answer to the question, links to documents related to the question. We always chose the answer available as the direct answer.
To assess the correctness of the answers given both by participants in the DQA experiments and by the QALD systems, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .
For DQA, four participants answered each question; therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers given by the participants are available online.
Evaluation Results and Discussion
Table TABREF8 presents the overall evaluation metrics of DQA, and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).
In further evaluations, we compare DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as representative of QALD tools, as it provides state-of-the-art results, and is well grounded in the Semantic Web community.
Comparing DQA and WDAqua, the first interesting question is: To what extent is DQA helpful on questions that could not be answered by the QALD system? For WDAqua the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer were answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered by WDAqua, then overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as a complementary component in QALD systems.
As expected, questions that are difficult to answer with one approach are also harder for the other approach, since some questions in the dataset are simply more complex to process and understand than others. However, almost 70% of questions not answered by WDAqua could still be answered by DQA. As an example of a case which is easier to answer for one approach than the other, a question that DQA users could answer, but where WDAqua failed, is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard to interpret correctly for a machine. In contrast to DQA, QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 . Simpler questions, like “Who is the mayor of Paris?”, were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.
Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to the question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidate entities, i.e., all African birds and all extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge number of nodes on the exploration canvas.
Overall, the experiments indicate that, in addition to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing the user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, like understanding the dataset characteristics, sharing of resulting diagrams (if supported by the tool), and finding more information related to the original information need.
For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering; here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can, for example, have an “Explore visually with diagrams” button, which brings the user to a canvas on which the entities detected by the QALD system within the question and results (if any) are displayed as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).
The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer.
CONCLUSIONS
In this work, we compare two approaches to answering questions over Linked Data datasets: a visual diagrammatic approach (DQA), which involves iterative exploration of the graph, and a natural language-based one (QALD). The evaluations show that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1) and for dataset understanding and further exploration. The contributions include: i) a comparative evaluation of four QALD tools and DQA with a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.
In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the discussed potentials. We envision an integrated tool, that uses QALD as basic method to find an answer to a question quickly, but also allows to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits.
ACKNOWLEDGEMENTS
This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program. | Unanswerable |
9653c89a93ac5c717a0a26cf80e9aa98a5ccf910 | 9653c89a93ac5c717a0a26cf80e9aa98a5ccf910_0 | Q: Which four QA systems do they use?
Text: INTRODUCTION
The Semantic Web provides a large number of structured datasets in the form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach is to provide natural language interfaces that allow question answering over Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on visual means only and requires more user interaction than natural language QA, but it also provides additional benefits, such as intuitive insights into dataset characteristics, a broader understanding of the answer, the potential to further explore the answer context, and knowledge sharing by storing and sharing the resulting diagrams.
In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.
The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.
In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.
The results indicate that, aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system with DQA in case of doubt.
This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 .
RELATED WORK
As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.
Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.
Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.
BIBREF0 describe the challenges of QA over knowledge bases using natural languages, and elaborate on the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are the following: gAnswer BIBREF6 is an approach for RDF QA that has a “graph-driven” perspective. In contrast to traditional approaches, which first try to understand the question and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is a QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics, which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. Finally, the WDAqua BIBREF0 system also aims for language independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a process of query expansion, SPARQL construction, query ranking and then making an answer decision.
For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset.
SYSTEM DESCRIPTION
The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .
The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.
To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .
As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 .
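For comparison with the QALD side, the same information need corresponds to a single-triple SPARQL query that a QALD system would typically construct. The sketch below is illustrative only and is not part of the DQA tool; the Wikidata identifiers wd:Q90 (Paris) and wdt:P6 (head of government) are assumed here for illustration.

```python
# Illustrative sketch: the SPARQL counterpart of the "mayor of Paris" diagram,
# sent to the public Wikidata endpoint. Identifiers wd:Q90 (Paris) and
# wdt:P6 (head of government) are assumptions for this example.
import requests

QUERY = """
SELECT ?mayorLabel WHERE {
  wd:Q90 wdt:P6 ?mayor .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "dqa-qald-example/0.1 (illustration)"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["mayorLabel"]["value"])
```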
EVALUATION
Here we present the evaluation of DQA in comparison to four QALD systems.
Evaluation Setup
As evaluation dataset, we reuse questions from the QALD7 benchmark task 4 “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which helps a user find the answer. We follow the model of BIBREF14 , who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each, resulting in 36 questions from the QALD dataset. The questions were picked in equal number for the five basic question categories.
20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.
The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .
For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces, and submitted them to the systems. Typically QALD tools provide a distinct answer, which may be a simple literal, or a set of entities which represent the answer, and which can be compared to the gold standard result. However, the WDAqua system, sometimes, additionally to the direct answer to the question, provides links to documents related to the question. We always chose the answer available via direct answer.
To assess the correctness of the answers given both by participants in the DQA experiments and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. P measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. R is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and F1 is the harmonic mean of P and R. As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then P = 0.5, R = 1.0 and F1 ≈ 0.67.
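A minimal sketch of this answer-level scoring, treating system and gold answers as sets of items; the function and variable names are our own and only reproduce the definitions above.

```python
def answer_prf(system_answer, gold_answer):
    """Precision, recall and F1 over answer items, as defined above."""
    system_items, gold_items = set(system_answer), set(gold_answer)
    correct = len(system_items & gold_items)
    precision = correct / len(system_items) if system_items else 0.0
    recall = correct / len(gold_items) if gold_items else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Einstein example from the text: P = 0.5, R = 1.0, F1 ~ 0.67
print(answer_prf({"Ulm", "Bern"}, {"Ulm"}))
```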
For DQA, four participants answered each question; therefore, we took the average P, R, and F1 values over the four evaluators as the result per question. The detailed answers by the participants are available online.
Evaluation Results and Discussion
Table TABREF8 presents the overall evaluation metrics of DQA, and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).
In further evaluations, we compare DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as representative of QALD tools, as it provides state-of-the-art results, and is well grounded in the Semantic Web community.
Comparing DQA and WDAqua, the first interesting question is: To what extent is DQA helpful on questions that could not be answered by the QALD system? For WDAqua, the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer was answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered with WDAqua, then overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as a complementary component in QALD systems.
As expected, questions that are difficult to answer with one approach are also harder for the other approach – as some questions in the dataset are just more complex to process and understand than others. However, almost 70% of questions not answered by WDAqua could still be answered by DQA. As an example of cases which are easier to answer for one approach than the other, a question that DQA users could answer but where WDAqua failed is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard to interpret correctly for a machine. In contrast to DQA, QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 . Simpler questions, like “Who is the mayor of Paris?”, were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.
Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidates entities, ie. all African birds and extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge amount of nodes on the exploration canvas.
Overall, the experiments indicate that, in addition to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing a user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, such as understanding the dataset characteristics, sharing the resulting diagrams (if supported by the tool), and finding more information related to the original information need.
For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering, and here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can, for example, have an “Explore visually with diagrams” button, which brings the user to a canvas on which the entities detected by the QALD system within the question and the results (if any) are displayed as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).
The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer.
CONCLUSIONS
In this work, we compare two approaches to answering questions over Linked Data datasets: a visual diagrammatic approach (DQA), which involves iterative exploration of the graph, and a natural language-based approach (QALD). The evaluations show that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1) and for dataset understanding and further exploration. The contributions include: i) a comparative evaluation of four QALD tools and DQA with a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.
In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the discussed potential. We envision an integrated tool that uses QALD as the basic method to find an answer to a question quickly, but also allows the user to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits.
ACKNOWLEDGEMENTS
This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program. | WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 |
b921a1771ed0ba9dbeff9da000336ecf2bb38322 | b921a1771ed0ba9dbeff9da000336ecf2bb38322_0 | Q: How many iterations of visual search are done on average until an answer is found?
Text: INTRODUCTION
The Semantic Web provides a large number of structured datasets in the form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach is to provide natural language interfaces that allow question answering over Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on visual means only; it requires more user interaction than natural language QA, but also provides additional benefits such as intuitive insights into dataset characteristics, a broader understanding of the answer, the potential to further explore the answer context, and knowledge sharing by storing and sharing the resulting diagrams.
In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.
The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.
In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.
The results indicate that, aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system using DQA in case of doubt.
This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 .
RELATED WORK
As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.
Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.
Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.
BIBREF0 describe the challenges of QA over knowledge bases using natural languages, and elaborate on the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are the following: gAnswer BIBREF6 is an approach for RDF QA that has a “graph-driven” perspective. In contrast to traditional approaches, which first try to understand the question and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is a QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics, which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. Finally, the WDAqua BIBREF0 system also aims for language independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a process of query expansion, SPARQL construction, query ranking and then making an answer decision.
For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset.
SYSTEM DESCRIPTION
The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .
The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.
To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .
As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 .
EVALUATION
Here we present the evaluation of DQA in comparison to four QALD systems.
Evaluation Setup
As evaluation dataset, we reuse questions from the QALD7 benchmark task 4 “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which helps a user find the answer. We follow the model of BIBREF14 , who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each, resulting in 36 questions from the QALD dataset. The questions were picked in equal number for the five basic question categories.
20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.
The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .
For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces, and submitted them to the systems. Typically QALD tools provide a distinct answer, which may be a simple literal, or a set of entities which represent the answer, and which can be compared to the gold standard result. However, the WDAqua system, sometimes, additionally to the direct answer to the question, provides links to documents related to the question. We always chose the answer available via direct answer.
To assess the correctness of the answers given both by participants in the DQA experiments and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. P measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. R is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and F1 is the harmonic mean of P and R. As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then P = 0.5, R = 1.0 and F1 ≈ 0.67.
For DQA, four participants answered each question; therefore, we took the average P, R, and F1 values over the four evaluators as the result per question. The detailed answers by the participants are available online.
Evaluation Results and Discussion
Table TABREF8 presents the overall evaluation metrics of DQA, and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).
In further evaluations, we compare DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as representative of QALD tools, as it provides state-of-the-art results, and is well grounded in the Semantic Web community.
Comparing DQA and WDAqua, the first interesting question is: To what extent is DQA helpful on questions that could not be answered by the QALD system? For WDAqua, the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer was answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered with WDAqua, then overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as a complementary component in QALD systems.
As expected, questions that are difficult to answer with one approach are also harder for the other approach – as some questions in the dataset are just more complex to process and understand than others. However, almost 70% of questions not answered by WDAqua could still be answered by DQA. As an example of cases which are easier to answer for one approach than the other, a question that DQA users could answer but where WDAqua failed is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard to interpret correctly for a machine. In contrast to DQA, QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 . Simpler questions, like “Who is the mayor of Paris?”, were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.
Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidates entities, ie. all African birds and extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge amount of nodes on the exploration canvas.
Overall, the experiments indicate that, in addition to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing a user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, such as understanding the dataset characteristics, sharing the resulting diagrams (if supported by the tool), and finding more information related to the original information need.
For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering, and here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can, for example, have an “Explore visually with diagrams” button, which brings the user to a canvas on which the entities detected by the QALD system within the question and the results (if any) are displayed as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).
The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer.
CONCLUSIONS
In this work, we compare two approaches to answering questions over Linked Data datasets: a visual diagrammatic approach (DQA), which involves iterative exploration of the graph, and a natural language-based approach (QALD). The evaluations show that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1) and for dataset understanding and further exploration. The contributions include: i) a comparative evaluation of four QALD tools and DQA with a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.
In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the discussed potential. We envision an integrated tool that uses QALD as the basic method to find an answer to a question quickly, but also allows the user to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits.
ACKNOWLEDGEMENTS
This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program. | Unanswerable |
412aff0b2113b7d61c914edf90b90f2994390088 | 412aff0b2113b7d61c914edf90b90f2994390088_0 | Q: Do they test performance of their approaches using human judgements?
Text: INTRODUCTION
The Semantic Web provides a large number of structured datasets in the form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach is to provide natural language interfaces that allow question answering over Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on visual means only; it requires more user interaction than natural language QA, but also provides additional benefits such as intuitive insights into dataset characteristics, a broader understanding of the answer, the potential to further explore the answer context, and knowledge sharing by storing and sharing the resulting diagrams.
In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.
The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.
In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.
The results indicate that, aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system using DQA in case of doubt.
This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 .
RELATED WORK
As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.
Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.
Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.
BIBREF0 describe the challenges of QA over knowledge bases using natural languages, and elaborate on the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are the following: gAnswer BIBREF6 is an approach for RDF QA that has a “graph-driven” perspective. In contrast to traditional approaches, which first try to understand the question and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is a QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics, which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. Finally, the WDAqua BIBREF0 system also aims for language independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a process of query expansion, SPARQL construction, query ranking and then making an answer decision.
For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset.
SYSTEM DESCRIPTION
The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .
The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.
To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .
As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 .
EVALUATION
Here we present the evaluation of DQA in comparison to four QALD systems.
Evaluation Setup
As evaluation dataset, we reuse questions from the QALD7 benchmark task 4 “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which helps a user find the answer. We follow the model of BIBREF14 , who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each, resulting in 36 questions from the QALD dataset. The questions were picked in equal number for the five basic question categories.
20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.
The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .
For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces, and submitted them to the systems. Typically QALD tools provide a distinct answer, which may be a simple literal, or a set of entities which represent the answer, and which can be compared to the gold standard result. However, the WDAqua system, sometimes, additionally to the direct answer to the question, provides links to documents related to the question. We always chose the answer available via direct answer.
To assess the correctness of the answers given both by participants in the DQA experiments and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. P measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. R is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and F1 is the harmonic mean of P and R. As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then P = 0.5, R = 1.0 and F1 ≈ 0.67.
For DQA, four participants answered each question; therefore, we took the average P, R, and F1 values over the four evaluators as the result per question. The detailed answers by the participants are available online.
Evaluation Results and Discussion
Table TABREF8 presents the overall evaluation metrics of DQA, and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).
In further evaluations, we compare DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as representative of QALD tools, as it provides state-of-the-art results, and is well grounded in the Semantic Web community.
Comparing DQA and WDAqua, the first interesting question is: To what extent is DQA helpful on questions that could not be answered by the QALD system? For WDAqua, the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer was answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered with WDAqua, then overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as a complementary component in QALD systems.
As expected, questions that are difficult to answer with one approach are also harder for the other approach – as some questions in the dataset are just more complex to process and understand than others. However, almost 70% of questions not answered by WDAqua could still be answered by DQA. As an example of cases which are easier to answer for one approach than the other, a question that DQA users could answer but where WDAqua failed is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard to interpret correctly for a machine. In contrast to DQA, QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 . Simpler questions, like “Who is the mayor of Paris?”, were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.
Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidates entities, ie. all African birds and extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge amount of nodes on the exploration canvas.
Overall, the experiments indicate that, in addition to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing a user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, such as understanding the dataset characteristics, sharing the resulting diagrams (if supported by the tool), and finding more information related to the original information need.
For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering, and here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can, for example, have an “Explore visually with diagrams” button, which brings the user to a canvas on which the entities detected by the QALD system within the question and the results (if any) are displayed as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).
The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer.
CONCLUSIONS
In this work, we compare two approaches to answering questions over Linked Data datasets: a visual diagrammatic approach (DQA), which involves iterative exploration of the graph, and a natural language-based approach (QALD). The evaluations show that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1) and for dataset understanding and further exploration. The contributions include: i) a comparative evaluation of four QALD tools and DQA with a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.
In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the discussed potential. We envision an integrated tool that uses QALD as the basic method to find an answer to a question quickly, but also allows the user to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits.
ACKNOWLEDGEMENTS
This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program. | Yes |
010e3793eb1342225857d3f95e147d8f8467192a | 010e3793eb1342225857d3f95e147d8f8467192a_0 | Q: What are the sizes of both datasets?
Text: Introduction
Following previous research on automatic detection and correction of dt-mistakes in Dutch BIBREF0, this paper investigates another stumbling block for both native and non-native speakers of Dutch: the correct use of die and dat. The multiplicity of syntactic functions and the dependency on the antecedent's gender and number make this a challenging task for both human and computer. The grammar concerning die and dat is threefold. Firstly, they can be used as dependent or independent demonstrative pronouns (aanwijzend voornaamwoord), with the former replacing the article before the noun it modifies and the latter being a noun phrase that refers to a preceding/following noun phrase or sentence. The choice between the two pronouns depends on the gender and number of the antecedent: dat refers to neuter, singular nouns and sentences, while die refers to masculine, singular nouns and plural nouns independent of their gender. Secondly, die and dat can be used as relative pronouns introducing relative clauses (betrekkelijk voornaamwoord), which provide additional information about the directly preceding antecedent they modify. Similar rules as for demonstrative pronouns apply: masculine, singular nouns and plural nouns are followed by the relative pronoun die, neuter singular nouns by dat. Lastly, dat can be used as a subordinating conjunction (onderschikkend voegwoord) introducing a subordinating clause. A brief overview of the grammar is given in Table TABREF1.
The aim is to develop (1) a binary classification model that automatically detects, predicts and corrects die and dat instances in texts. Furthermore, the correct die/dat instance and the syntactic function of the predicted die and dat are jointly predicted in (2) a multitask classification model. Since research on neural-based machine learning approaches for Dutch demonstrative and relative pronoun resolution - especially for die and dat - is to our knowledge non-existent, this project can serve as a starting point for further research on machine learning applications concerning Dutch subordinating conjunctions, demonstrative pronouns and relative pronouns.
Related Work
The incentive for this research project is the detection and correction system for dt-mistakes in Dutch BIBREF0. For that task, a system with a context encoder - a bidirectional LSTM with an attention mechanism - and a verb encoder - whose outputs are then fed to a feedforward neural network - has been developed to predict different verb suffixes. As mentioned above, this project explores the possibility of constructing a neural network system for correcting the Dutch demonstrative and relative pronouns die and dat. The task is also called pronoun resolution or anaphora resolution. Anaphora resolution and pronoun prediction have been major research subjects in machine translation research. In BIBREF3, for example, the effect of multiple English coreference resolvers on pronoun translation in an English-Dutch machine translation system with deep transfer has been investigated. Niton, Morawiecki and Ogrodnizuk (2018) developed a fully connected network with three layers in combination with a sieve-based architecture for Polish coreference resolution BIBREF4. Not only in machine translation, but also in general, much research has been conducted on machine learning approaches towards coreference resolution BIBREF5BIBREF6BIBREF7 and pronoun resolution BIBREF8, BIBREF9. However, little to no research has been conducted specifically on die/dat correction.
Dataset
The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains. Examples of text types are newsletters, newspaper articles, legal texts, subtitles and blog posts. All texts except for texts from social media have been automatically tokenized, POS tagged and lemmatized. It contains significantly more data and more varied data than the Europarl corpus. Due to the high amount of data in the corpus, only three subparts are used: Wikipedia texts, reports and newspaper articles. These subparts are chosen because the number of wrongly used die and dat is expected to be low.
Preprocessing
The sentences in the Europarl corpus are tokenized and parsed using the Dutch version of TreeTagger BIBREF10. Only sentences which contain at least one die or dat are extracted from the corpora. Subsequently, each single occurrence of die and dat is detected and replaced by a unique token ('PREDICT'). When there are multiple occurrences in one sentence, only one occurrence is replaced at a time. Consequently, a sentence can appear multiple times in the training and test dataset with the unique token for die and dat at a different place in the sentence. Each sentence is paired with its automatically assigned ground truth label for die and dat. The Europarl dataset, on the one hand, contains 70,057 dat-labeled and 33,814 die-labeled sentences. The resulting train and test sets consist of 103,871 (Europarl) and 1,269,091 (SoNaR) sentences. The SoNaR dataset, on the other hand, has more than ten times the number of labeled sentences with 736,987 dat-labeled and 532,104 die-labeled. Considering the imbalance in both datasets, it may be argued that dat occurs more frequently than die due to its syntactic function as a subordinating conjunction rather than to its use as a demonstrative pronoun, since the latter can only refer to singular, neuter nouns. As for the multitask classification model, the POS tags for die and dat present in the SoNaR corpus are extracted and stored as ground truth labels: 407,848 subordinating conjunction, 387,292 relative pronoun and 473,951 demonstrative pronoun. From a brief qualitative assessment of the POS tags for die and dat in both corpora, the POS tags in the SoNaR corpus appear to be more reliable than the POS tags generated by TreeTagger in the Europarl corpus. Therefore, only the SoNaR dataset is used for the multitask classification. An overview of the datasets after preprocessing is given in Table TABREF2.
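A hedged sketch of the example-generation step described above; tokenization and casing are simplified, and the function name is our own, but the masking and labeling follow the description in this section.

```python
# Sketch of the preprocessing described above: every single occurrence of
# "die"/"dat" in a tokenized sentence yields one labeled instance, with only
# that occurrence replaced by the unique 'PREDICT' token.
def make_examples(tokens):
    examples = []
    for i, token in enumerate(tokens):
        if token.lower() in ("die", "dat"):
            masked = tokens[:i] + ["PREDICT"] + tokens[i + 1:]
            examples.append((masked, token.lower()))  # label is the original word
    return examples

# A sentence with two target words produces two training instances.
print(make_examples(["Ik", "denk", "dat", "die", "man", "komt"]))
```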
Binary Classification Model ::: Model Architecture
For the binary classification model that predicts the correct die or dat for each sentence, a Bidirectional Long Short-Term Memory (BiLSTM) neural network is used. Since the antecedent can be rather distant from the relative or demonstrative pronoun due to adjectives and sentence boundaries, an LSTM architecture is chosen over a regular Recurrent Neural Network, because the latter does not cope well with learning non-trivial long-distance dependencies BIBREF11. Furthermore, a bidirectional LSTM is chosen over a single left-to-right LSTM, because the antecedent can be either before or after the die or dat. The architecture of the binary classification model is provided in Fig. FIGREF7. The input sentence is first sent through an embedding layer where each token is transformed into a 100-dimensional word embedding which has been initially trained on the dataset of sentences containing at least one die or dat using the Word2Vec Skip-gram model BIBREF12. The weights of the embedding layer are trainable. The word embeddings are then sent through a BiLSTM layer. The bidirectional LSTM concatenates the outputs of two LSTMs: the left-to-right $LSTM_{forward}$ computes the states $\overrightarrow{h_1}..\overrightarrow{h_N}$ and the right-to-left $LSTM_{backward}$ computes the states $\overleftarrow{h_N}..\overleftarrow{h_1}$. This means that at time $t$ for input $x$, represented by its word embedding $E(x)$, the bidirectional LSTM outputs the concatenation of the two hidden states: $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$
The concatenated output is then sent through a maxpooling layer, linear layer and, eventually, a softmax layer to get a probability distribution over the two classes. In order to prevent the model from overfitting and co-adapting too much, dropout regularization is implemented in the embedding layer and the linear layer. In both layers, dropout is set to $p = 0.5$ which randomly zeroes out nodes in the layer using samples from a Bernoulli distribution.
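A compact PyTorch sketch of this architecture is shown below. The hidden size of the LSTM is our own choice, as it is not fixed by the description above, and the embedding layer would be initialized with the pre-trained Word2Vec weights.

```python
import torch
import torch.nn as nn

class DieDatClassifier(nn.Module):
    """Embedding -> BiLSTM -> max-pooling -> linear -> softmax over {dat, die}."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # initialized with Word2Vec vectors
        self.emb_dropout = nn.Dropout(p=0.5)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.lin_dropout = nn.Dropout(p=0.5)
        self.linear = nn.Linear(2 * hidden_dim, 2)

    def forward(self, token_ids):                            # token_ids: (batch, seq_len)
        emb = self.emb_dropout(self.embedding(token_ids))
        states, _ = self.bilstm(emb)                         # (batch, seq_len, 2 * hidden_dim)
        pooled, _ = states.max(dim=1)                        # max-pooling over time steps
        logits = self.linear(self.lin_dropout(pooled))
        return torch.softmax(logits, dim=-1)                 # probabilities for dat and die
```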
Binary Classification Model ::: Experimental Set-Up
Each dataset is randomly divided into a training (70%), validation (15%) and test (15%) set. The data is fed to the model in batches of 128 samples and reshuffled at every epoch. The objective function that is minimized is Binary Cross-Entropy:

$BCE = -\frac{1}{N}\sum _{i=1}^{N} \big [ y_i \log p(\hat{y}_i) + (1 - y_i) \log (1 - p(\hat{y}_i)) \big ]$
where $y_i$ is the ground truth label (0 for dat and 1 for die) and $p(\hat{y}_i)$ is the predicted probability for each of the $N$ input sentences of the train set. The weights are optimized by Stochastic Gradient Descent with learning rate = 0.01 and momentum = 0.9. The model is trained for 24 epochs.
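A sketch of the corresponding training loop in PyTorch is given below, assuming `model` is the classifier sketched above and `train_set` yields (token ids, label) pairs with label 0 = dat and 1 = die.

```python
import torch
from torch.utils.data import DataLoader

loader = DataLoader(train_set, batch_size=128, shuffle=True)       # reshuffled every epoch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(24):
    for token_ids, labels in loader:
        probs = model(token_ids)                                    # (batch, 2) class probabilities
        # Binary Cross-Entropy on the probability of the 'die' class
        loss = torch.nn.functional.binary_cross_entropy(probs[:, 1], labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```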
Binary Classification Model ::: Results
An overview of the performance results is given in Table TABREF11. We compare model performance when trained and tested on the two corpora individually and experiment with different settings of the two corpora in order to investigate the effect of dataset changes on model performance. There are three settings: full in which the datasets contain full sentences, windowed in which sentences are windowed around the unique prediction token without exceeding sentence boundaries (five tokens before and after the token, including token), and windowed no_boundaries in which the windows can exceed sentence boundaries. When limiting the input sentences to windowed sentences in the Europarl corpus (2), model performance increases significantly on all metrics, especially for die prediction. The difference in model performance when trained and tested on the Europarl (2) and SoNaR (3) windowed datasets is particularly noticeable in the precision, recall and F1 scores. Model performance for dat prediction is better for the Europarl dataset than for the SoNaR dataset, while model performance for die prediction is notably better for the SoNaR dataset than for the Europarl dataset. Lastly, a change in windowing seems to have a positive impact on the overall model performance: the model trained and tested on the SoNaR dataset with windows exceeding sentence boundaries (3) outperforms that on the SoNaR dataset with windows within sentence boundaries (4) on every metric.
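The windowing itself can be sketched as follows; this is our reading of the settings above, and since it is not fully specified whether the prediction token counts towards the five tokens, the window size is left as a parameter.

```python
def window(tokens, center, n=5, prev_sent=None, next_sent=None):
    """Window of n tokens before and after position `center` (the 'PREDICT' token).

    With prev_sent/next_sent supplied, the window may spill over into the
    neighbouring sentences ('windowed no_boundaries'); otherwise it is
    clipped to the current sentence ('windowed').
    """
    left = (prev_sent or []) + tokens[:center]
    right = tokens[center + 1:] + (next_sent or [])
    return left[-n:] + [tokens[center]] + right[:n]
```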
Multitask Classification Model ::: Model Architecture
The second model performs two prediction tasks. The first prediction task remains the binary classification of die and dat. The second prediction task concerns the prediction of three parts-of-speech (POS) or word classes, namely subordinating conjunction, relative pronoun and demonstrative pronoun. An overview of the model architectures is given in Fig. FIGREF13. For the BiLSTM model, the first layer is the embedding layer where the weights are initialized by means of the 200-dimensional pre-trained embedding matrix. The weights are updated after every epoch. The second layer consists of two bidirectional LSTMs where the output of the first bidirectional LSTM serves as input to the second bidirectional LSTM. The layer has dropout regularization equal to 0.2. The two-layer bidirectional LSTM concatenates the outputs at time $t$ into a 64-dimensional vector and sends it through a maxpooling layer. Until this point, the two tasks share the same parameters. The model then splits into two separate linear layers. The left linear layer transforms the 64-dimensional vector to a two-dimensional vector on which the softmax is computed. The softmax outputs the probability distribution over the dat and die labels. The right linear layer transforms the 64-dimensional vector to a three-dimensional vector on which the softmax is computed as well. The softmax outputs the probability distribution over the subordinating conjunction, relative pronoun and demonstrative pronoun labels. The second multitask classification model takes the immediate context around the 'PREDICT' token as additional input. Both the windowed sentence and context are first transformed into their word embedding representations. They are then separately sent through a sentence encoder and context encoder, respectively. The sentence encoder has the same architecture as the second and third layer of the BiLSTM model, namely a two-layer bidirectional LSTM and a maxpooling layer. For the context encoder, we experiment with two different architectures: a feedforward neural network and a one-layer bidirectional LSTM with dropout = 0.2 and a maxpooling layer on top. Both sentence and context encoder output a 64-dimensional vector each, which are subsequently concatenated into a 128-dimensional vector. As in the BiLSTM model, the resulting vector is sent through two separate linear layers to output probability distributions for both the die/dat and POS prediction tasks.
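A PyTorch sketch of the variant with a sentence encoder and a bidirectional LSTM context encoder is shown below; the dimensions follow the description above, while the exact dropout placement and remaining details are our own simplifications.

```python
import torch
import torch.nn as nn

class MultitaskClassifier(nn.Module):
    """Shared encoders with two heads: die/dat (2 classes) and POS (3 classes)."""

    def __init__(self, vocab_size, emb_dim=200, hidden_dim=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)        # init from pre-trained matrix
        self.sent_encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=2, dropout=0.2,
                                    batch_first=True, bidirectional=True)
        self.ctx_encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=1,
                                   batch_first=True, bidirectional=True)
        self.ctx_dropout = nn.Dropout(p=0.2)                      # simplified dropout placement
        self.die_dat_head = nn.Linear(4 * hidden_dim, 2)
        self.pos_head = nn.Linear(4 * hidden_dim, 3)

    def forward(self, sent_ids, ctx_ids):
        sent_states, _ = self.sent_encoder(self.embedding(sent_ids))
        ctx_states, _ = self.ctx_encoder(self.embedding(ctx_ids))
        sent_vec = sent_states.max(dim=1).values                  # max-pooling -> 64-dim
        ctx_vec = self.ctx_dropout(ctx_states.max(dim=1).values)  # max-pooling -> 64-dim
        joint = torch.cat([sent_vec, ctx_vec], dim=-1)            # 128-dim joint representation
        return (torch.softmax(self.die_dat_head(joint), dim=-1),
                torch.softmax(self.pos_head(joint), dim=-1))
```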
Multitask Classification Model ::: Experimental Set-up
As discussed in Section SECREF4, the POS ground truth labels in SoNaR-based datasets are more reliable than the POS labels in the Europarl-based datasets that are generated by TreeTagger. Consequently, only the SoNaR dataset is used for training and testing. The dataset is randomly divided into a training (70%), validation (15%) and test (15%) set. The data is fed into the model in batches of 516 samples and reshuffled at every epoch. For die/dat prediction, the Binary Cross-Entropy loss function is minimized. The weights are optimized by Stochastic Gradient Descent with learning rate = 0.01 and momentum = 0.9. For POS prediction, Cross-Entropy is minimized:

$CE = -\sum _{c=1}^{C} y_{i,c} \log \, p_{i,c}$
where $C$ is the number of classes, in this case three, $y_{i,c}$ is the binary indicator (0 or 1) of whether class label $c$ is the correct classification for input sentence $i$ and $p_{i,c}$ is the probability of sentence $i$ having class label $c$. The weights are optimized using Adam optimization with learning rate = 0.0001. The model is trained for 35 epochs.
Multitask Classification Model ::: Results
An overview of the performance results for die/dat prediction is given in Table TABREF19. The same dataset settings as for the binary classification model are used: full in which the datasets contain full sentences, windowed in which sentences are windowed around the unique prediction token without exceeding sentence boundaries (five tokens before and after the token, including token), and windowed no_boundaries in which the windows can exceed sentence boundaries. As mentioned in section SECREF4, we only use the SoNaR dataset. The multitask classification models generally perform better with the windowed no_boundaries dataset setting. Concerning the model architectures, it can be concluded that altering the model architecture has no large impact on model performance for die/dat prediction. However, altering the model architecture from an architecture with merely a sentence encoder to an architecture with both sentence and context encoder does have a more significant positive impact on model performance for POS prediction (Table TABREF20). For that prediction task, the multitask classification model with a bidirectional LSTM context encoder trained and tested on windowed SoNaR sentences reaches best performance results on almost all evaluation metrics.
Discussion
In Section SECREF5, a first classification model based on neural networks is constructed to predict die and dat labels. The binary classification model consists of an embedding layer, a bidirectional LSTM, a maxpooling layer and a linear layer. The softmax is taken over the output of the last layer and provides a probability distribution over die and dat prediction labels. The sentences receive the prediction label with the highest probability. It is trained, validated and tested four times using four different dataset settings. From an analysis of the performance metric results, several conclusions can be drawn. Firstly, in all cases, the model appears to predict the dat label more precisely than the die label. This may be caused by the higher number of dat than die instances in training, validation and test datasets extracted from the Europarl and SoNaR corpus. Secondly, when the dataset is more balanced, as in the SoNaR corpus, the difference in performance between die and dat labels decreases as expected. Thirdly, die/dat prediction performance increases when the window over the sentences is not limited to sentence boundaries (SoNaR windowed, no_boundaries). A probable reason for this higher performance is the model's ability to detect antecedents in the preceding or following sentence, which it cannot do when it is trained and tested on boundary-constrained windowed sentences (SoNaR windowed). Lastly, the model's performance drops significantly when the binary classification model is trained and tested on full sentences (Europarl full). In conclusion, the binary classification model performs best when it is trained on the larger, more evenly balanced SoNaR corpus that consists of windowed sentences that are not limited to sentence boundaries. A clear performance overview of the best performing binary classification and multitask classification models for die/dat prediction can be found in Table TABREF21.
In Section SECREF6, several multitask classification models are constructed to jointly execute two prediction tasks: die/dat prediction and POS prediction. The BiLSTM multitask classification model consists of an embedding layer, two consecutive bidirectional LSTMs and a maxpooling layer. The output of the maxpooling layer is used as input to two separate linear layers followed by a softmax layer. The two softmax layers yield a probability distribution for die/dat and POS labels. The model trained and tested on windowed SoNaR sentences that exceed sentence boundaries performs better than the model on boundary-constrained windowed sentences and full sentences. The best performing BiLSTM multitask classification model (Model 2) outperforms the best binary classification model (Model 1) on every evaluation metric for die/dat prediction. This could arguably be due to the increased batch size, the doubled embedding dimension, the extra bidirectional LSTM layer, the influence of the second prediction task and/or the split into a sentence and a context encoder. Firstly, the data is divided into batch sizes of 512 instead of 128. Table TABREF22 shows, however, that there is little consistent difference in performance when batch size is 512 or 128. Therefore, it can be suggested that an increased batch size has no direct positive influence on model performance. Secondly, the input data is transformed to 200-dimensional word embeddings instead of 100-dimensional word embeddings. From the results displayed in Table TABREF22, it appears that a change in word embedding dimension could be causing a slight increase in model performance. Thirdly, the multitask model contains two bidirectional LSTM layers as opposed to the binary model that has only one layer. Table TABREF23 shows the influence of the number of layers on the performance of the binary classification model. When the binary classification model has an additional bidirectional LSTM layer, all the evaluation metrics rise by approximately 2%. However, when the binary classification model has three bidirectional LSTM layers, model performance drops significantly. It appears that the doubled number of layers is indeed one of the reasons why the multitask classification models perform better than the binary classification model. However, not every rise in number of layers necessarily influences a model's performance in a positive manner. Concerning the influence of the POS prediction task on die/dat prediction performance and syntactic knowledge in general, a comparison between a two-layer bidirectional LSTM binary classification model and the two-layer bidirectional LSTM multitask classification model is made and displayed in Table TABREF24. It seems that the integration of POS knowledge positively influences die/dat prediction performance, as all evaluation metrics increase. When examining the influence of a context encoder on die/dat prediction performance of Model 3 and Model 4, the evaluation metrics of Model 2, 3 and 4 are compared. The metric scores are fairly similar, which leads to the conclusion that the addition of a context encoder has little to no further influence on die/dat prediction performance. Moreover, the encoder architecture does not cause a considerable difference in die/dat prediction performance between the model with a feedforward context encoder (Model 3) and the model with a bidirectional LSTM context encoder (Model 4).
It can thus be suggested that a model does not necessarily profit from a different architecture and that an extra focus on immediate context is not additionally advantageous for the die/dat prediction task.
Contrary to the little to no impact on die/dat prediction performance, the context encoder - especially the bidirectional LSTM context encoder - does have a direct positive impact on POS prediction performance. The difference in POS prediction performance between the three multitask prediction models can be found in Table TABREF25. The model with the bidirectional LSTM context encoder outperforms the other two multitask classification models on every evaluation metric. Considering its highest POS prediction performance and high die/dat prediction performance, it is suggested that the multitask prediction model with bidirectional LSTM context encoder (Model 4) is the overall best model.
Conclusion
Deciding which pronoun to use in various contexts can be a complicated task. The correct use of die and dat as Dutch pronouns entails knowing the antecedent and - if the antecedent is a noun - its grammatical gender and number. We experimented with neural network models to examine whether die and dat instances in sentences can be computationally predicted and, if necessary, corrected. Our binary classification model reaches a promising 84.56 % accuracy. In addition, we extended that model to three multitask classification models that not only predict die and dat, but also predict the POS (demonstrative pronoun, relative pronoun and subordinating conjunction). By increasing the word embedding dimension, doubling the number of bidirectional LSTM layers and integrating POS knowledge in the model, the multitask classification models raise die/dat prediction performance by approximately 4 %. Concerning POS prediction performance, the multitask classification model consisting of a sentence and context encoder performs best on all evaluation metrics and reaches an accuracy of 87.78 %.
There are ample opportunities to further analyze, enhance and/or extend the die/dat prediction model. A qualitative study of the learned model weights, for example, could provide more insight in the prediction mechanism of the models. We already obtain excellent results with a simple neural architecture comprising relatively few parameters. We believe that more complex architectures such as a transformer architecture BIBREF13 with multihead attention will improve results. It might also be interesting to look at the possibility of integrating a language model such as BERT BIBREF14 in the classification model (e.g., as pretrained embeddings). Moreover, the binary classification task could be extended to a multiclass classification task to predict not only die and dat labels, but also respectively equivalent deze and dit labels. The difference between die/dat and deze/dat, however, entails a difference in temporal and spatial information: while die/dat indicates a physically near or earlier mentioned antecedent, deze/dit implies that the antecedent is physically distant or later mentioned in the text. That difference may possibly cause a prediction model to base its predictions on other tokens in a text. | The Dutch section consists of 2,333,816 sentences and 53,487,257 words., The SONAR500 corpus consists of more than 500 million words obtained from different domains. |
c20bb0847ced490a793657fbaf6afb5ef54dad81 | c20bb0847ced490a793657fbaf6afb5ef54dad81_0 | Q: Why are the scores for predicting perceived musical hardness and darkness extracted only for subsample of 503 songs?
Text: Introduction
As audio and text features provide complementary layers of information on songs, a combination of both data types has been shown to improve the automatic classification of high-level attributes in music such as genre, mood and emotion BIBREF0, BIBREF1, BIBREF2, BIBREF3. Multi-modal approaches interlinking these features offer insights into possible relations between lyrical and musical information (see BIBREF4, BIBREF5, BIBREF6).
In the case of metal music, sound dimensions like loudness, distortion and particularly hardness (or heaviness) play an essential role in defining the sound of this genre BIBREF7, BIBREF8, BIBREF9, BIBREF10. Specific subgenres – especially doom metal, gothic metal and black metal – are further associated with a sound that is often described as dark or gloomy BIBREF11, BIBREF12.
These characteristics are typically not limited to the acoustic and musical level. In a research strand that has so far been generally treated separately from the audio dimensions, lyrics from the metal genre have come under relatively close scrutiny (cf. BIBREF13). Topics typically ascribed to metal lyrics include sadness, death, freedom, nature, occultism or unpleasant/disgusting objects and are overall characterized as harsh, gloomy, dystopian, or satanic BIBREF14, BIBREF13, BIBREF15, BIBREF16, BIBREF17.
Until now, investigations on metal lyrics were limited to individual cases or relatively small corpora – with a maximum of 1,152 songs in BIBREF17. Besides this, the relation between the musical and the textual domain has not yet been explored. Therefore, we examine a large corpus of metal song lyrics, addressing the following questions:
Which topics are present within the corpus of metal lyrics?
Is there a connection between characteristic musical dimensions like hardness and darkness and certain topics occurring within the textual domain?
Methodology
In our sequential research design, the distribution of textual topics within the corpus was analyzed using latent Dirichlet allocation (LDA). This resulted in a topic model, which was used for a probabilistic assignment of topics to each of the song documents. Additionally, for a subset of these songs, audio features were extracted using models for high-level music dimensions. The use of automatic models for the extraction of both text as well as musical features allows for scalability as it enables a large corpus to be studied without depending on the process of manual annotation for each of the songs. The resulting feature vectors were then subjected to a correlation analysis. Figure FIGREF6 outlines the sequence of the steps taken in processing the data. The individual steps are explained in the following subsections.
Methodology ::: Text Corpus Creation and Cleaning
For gathering the data corpus, a web crawler was programmed using the Python packages Requests and BeautifulSoup. In total, 152,916 metal music lyrics were extracted from www.darklyrics.com.
Using Python’s langdetect package, all non-English texts were excluded. With the help of regular expressions, the texts were scanned for tokens indicating meta-information, which is not part of the actual lyrics. To this end, a list of stopwords referring to musical instruments or the production process (e.g. ‘recorded’, ‘mixed’, ‘arrangement by’, ‘band photos’) was defined in addition to common stopwords. After these cleaning procedures, 124,288 texts remained in the subsample. For text normalization, stemming and lemmatization were applied as further preprocessing steps.
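A simplified sketch of the language filtering and meta-information removal is given below; the stopword patterns shown are illustrative and not the full list used.

```python
import re
from langdetect import detect

META_PATTERNS = re.compile(r"(recorded|mixed|arrangement by|band photos)", re.IGNORECASE)

def clean_corpus(lyrics):
    """Keep English texts and strip lines carrying production meta-information."""
    cleaned = []
    for text in lyrics:
        try:
            if detect(text) != "en":
                continue
        except Exception:                 # langdetect fails on empty or unusual input
            continue
        lines = [line for line in text.splitlines() if not META_PATTERNS.search(line)]
        cleaned.append("\n".join(lines))
    return cleaned
```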
Methodology ::: Topic Modelling via Latent Dirichlet Allocation
We performed an LDA BIBREF18 on the remaining subsample to construct a probabilistic topic model. The LDA models were created using the Python library Gensim BIBREF19. The lyrics were first converted to a bag-of-words format, and standard weighting of terms provided by the Gensim package was applied.
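A sketch of this step with Gensim is shown below, assuming `docs` is the list of token lists produced by the preprocessing above; parameter values other than the number of topics are illustrative.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def train_lda(docs, num_topics=20):
    """Bag-of-words conversion followed by LDA training with Gensim."""
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]          # bag-of-words format
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=num_topics, passes=10, random_state=0)
    return lda, dictionary, corpus
```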
Log perplexity BIBREF20 and log UMass coherence BIBREF21 were calculated as goodness-of-fit measures evaluating topic models ranging from 10 to 100 topics. Considering these performance measures as well as qualitative interpretability of the resulting topic models, we chose a topic model including 20 topics – an approach comparable with BIBREF22. We then examined the most salient and most typical words for each topic.
Moreover, we used the ldavis package to analyze the structure of the resulting topic space BIBREF23. In order to do so, the Jensen-Shannon divergence between topics was calculated in a first step. In a second step, we applied multidimensional scaling (MDS) to project the inter-topic distances onto a two-dimensional plane. MDS is based on the idea of calculating dissimilarities between pairs of items of an input matrix while minimizing the strain function BIBREF24. In this case, the closer the topics are located to one another on the two-dimensional plane, the more they share salient terms and the more likely a combination of these topics appear in a song.
Methodology ::: High-Level Audio Feature Extraction
The high-level audio feature models used had been constructed in previous examinations BIBREF25, BIBREF26. In those music perception studies, ratings were obtained for 212 music stimuli in an online listening experiment by 40 raters.
Based on this ground truth, prediction models for the automatic extraction of high-level music dimensions – including the concepts of perceived hardness/heaviness and darkness/gloominess in music – had been trained using machine learning methods. In a second step, the model obtained for hardness had been evaluated using further listening experiments on a new unseen set of audio stimuli BIBREF26. The model has been refined against this backdrop, resulting in an $R^2$ value of 0.80 for hardness/heaviness and 0.60 for darkness/gloominess using five-fold cross-validation.
The resulting models embedded features implemented in LibROSA BIBREF27, Essentia BIBREF28 as well as the timbral models developed as part of the AudioCommons project BIBREF29.
Methodology ::: Investigating the Connection between Audio and Text Features
Finally, we drew a random sample of 503 songs and used Spearman's $\rho $ to identify correlations between the topics retrieved and the audio dimensions obtained by the high-level audio feature models. We opted for Spearman’s $\rho $ since it does not assume normal distribution of the data and is less prone to outliers and zero-inflation than Pearson’s $r$. Bonferroni correction was applied in order to account for multiple testing.
Results ::: Textual Topics
Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.
The salient terms of the first topic – and in parts also the second – appear relatively generic, as terms like e.g. ‘know’, ‘never’, and ‘time’ occur in many contexts. However, the majority of the remaining topics reveal distinct lyrical themes described as being characteristic for the metal genre. ‘Religion & satanism’ (topic #5) and descriptions of ‘brutal death’ (topic #7) can be considered as being typical for black metal and death metal respectively, whereas ‘battle’ (topic #6), ‘landscape & journey’ (topic #11), ‘struggle for freedom’ (topic #12), and ‘dystopia’ (topic #15), are associated with power metal and other metal subgenres.
This is highlighted in detail in Figure FIGREF11. Here, the topic distributions for two exemplary bands contained within the sample are presented. For these heat maps, data has been aggregated over individual songs showing the topic distribution at the level of albums over a band’s history. The examples chosen illustrate the dependence between textual topics and musical subgenres. For the band Manowar, which is associated with the genre of heavy metal, power metal or true metal, a prevalence of topic #6 (‘battle’) can be observed, while a distinctive prevalence of topic #7 (‘brutal death’) becomes apparent for Cannibal Corpse – a band belonging to the subgenre of death metal.
Within the topic configuration obtained via multidimensional scaling (see Figure FIGREF12), two latent dimensions can be identified. The first dimension (PC1) distinguishes topics with more common wording on the right hand side from topics with less common wording on the left hand side. This also correlates with the weight of the topics within the corpus. The second dimension (PC2) is characterized by a contrast between transcendent and sinister topics dealing with occultism, metaphysics, satanism, darkness, and mourning (#9, #3, #5, #13, and #16) at the top and comparatively shallow content dealing with personal life and Rock’n’Roll lifestyle using a rather mundane or vulgar vocabulary (#1, #8, and #19) at the bottom. This contrast can be interpreted as ‘otherworldliness / individual-transcending narratives’ vs. ‘worldliness / personal life’.
Results ::: Correlations with Musical Dimensions
In the final step of our analysis, we calculated the association between the twenty topics discussed above and the two high-level audio features hardness and darkness using Spearman’s $\rho $. The results are visualized in Figure FIGREF13 and the $\rho $ values listed in table TABREF10.
Significant positive associations can be observed between musical hardness and the topics ‘brutal death’, ‘dystopia’, ‘archaisms & occultism’, ‘religion & satanism’, and ‘battle’, while it is negatively linked to relatively mundane topics concerning ‘personal life’ and ‘love & romance’. The situation is similar for dark/gloomy sounding music, which in turn is specifically related to themes such as ‘dystopia’ and ‘(psychological) madness’. Overall, the strength of the associations is moderate at best, with a tendency towards higher associations for hardness than darkness. The strongest association exists between hardness and the topic ‘brutal death’ ($\rho = 0.267$, $p < 0.01$).
Conclusion and Outlook
Applying the example of metal music, our work examined the textual topics found in song lyrics and investigated the association between these topics and high-level music features. By using LDA and MDS in order to explore prevalent topics and the topic space, typical text topics identified in qualitative analyses could be confirmed and objectified based on a large text corpus. These include e.g. satanism, dystopia or disgusting objects. It was shown that musical hardness is particularly associated with harsh topics like ‘brutal death’ and ‘dystopia’, while it is negatively linked to relatively mundane topics concerning personal life and love. We expect that even stronger correlations could be found for metal-specific topics when including more genres covering a wider range of hardness/darkness values.
Therefore, we suggest transferring the method to a sample including multiple genres. Moreover, an integration with metadata such as genre information would allow for the testing of associations between topics, genres and high-level audio features. This could help to better understand the role of different domains in an overall perception of genre-defining attributes such as hardness. | Unanswerable |
ff8557d93704120b65d9b597a4fab40b49d24b6d | ff8557d93704120b65d9b597a4fab40b49d24b6d_0 | Q: How long is the model trained?
Text: Introduction
As audio and text features provide complementary layers of information on songs, a combination of both data types has been shown to improve the automatic classification of high-level attributes in music such as genre, mood and emotion BIBREF0, BIBREF1, BIBREF2, BIBREF3. Multi-modal approaches interlinking these features offer insights into possible relations between lyrical and musical information (see BIBREF4, BIBREF5, BIBREF6).
In the case of metal music, sound dimensions like loudness, distortion and particularly hardness (or heaviness) play an essential role in defining the sound of this genre BIBREF7, BIBREF8, BIBREF9, BIBREF10. Specific subgenres – especially doom metal, gothic metal and black metal – are further associated with a sound that is often described as dark or gloomy BIBREF11, BIBREF12.
These characteristics are typically not limited to the acoustic and musical level. In a research strand that has so far been generally treated separately from the audio dimensions, lyrics from the metal genre have come under relatively close scrutiny (cf. BIBREF13). Topics typically ascribed to metal lyrics include sadness, death, freedom, nature, occultism or unpleasant/disgusting objects and are overall characterized as harsh, gloomy, dystopian, or satanic BIBREF14, BIBREF13, BIBREF15, BIBREF16, BIBREF17.
Until now, investigations on metal lyrics were limited to individual cases or relatively small corpora – with a maximum of 1,152 songs in BIBREF17. Besides this, the relation between the musical and the textual domain has not yet been explored. Therefore, we examine a large corpus of metal song lyrics, addressing the following questions:
Which topics are present within the corpus of metal lyrics?
Is there a connection between characteristic musical dimensions like hardness and darkness and certain topics occurring within the textual domain?
Methodology
In our sequential research design, the distribution of textual topics within the corpus was analyzed using latent Dirichlet allocation (LDA). This resulted in a topic model, which was used for a probabilistic assignment of topics to each of the song documents. Additionally, for a subset of these songs, audio features were extracted using models for high-level music dimensions. The use of automatic models for the extraction of both text as well as musical features allows for scalability as it enables a large corpus to be studied without depending on the process of manual annotation for each of the songs. The resulting feature vectors were then subjected to a correlation analysis. Figure FIGREF6 outlines the sequence of the steps taken in processing the data. The individual steps are explained in the following subsections.
Methodology ::: Text Corpus Creation and Cleaning
For gathering the data corpus, a web crawler was programmed using the Python packages Requests and BeautifulSoup. In total, 152,916 metal music lyrics were extracted from www.darklyrics.com.
Using Python’s langdetect package, all non-English texts were excluded. With the help of regular expressions, the texts were scanned for tokens indicating meta-information, which is not part of the actual lyrics. To this end, a list of stopwords referring to musical instruments or the production process (e.g. ‘recorded’, ‘mixed’, ‘arrangement by’, ‘band photos’) was defined in addition to common stopwords. After these cleaning procedures, 124,288 texts remained in the subsample. For text normalization, stemming and lemmatization were applied as further preprocessing steps.
Methodology ::: Topic Modelling via Latent Dirichlet Allocation
We performed an LDA BIBREF18 on the remaining subsample to construct a probabilistic topic model. The LDA models were created using the Python library Gensim BIBREF19. The lyrics were first converted to a bag-of-words format, and standard weighting of terms provided by the Gensim package was applied.
Log perplexity BIBREF20 and log UMass coherence BIBREF21 were calculated as goodness-of-fit measures evaluating topic models ranging from 10 to 100 topics. Considering these performance measures as well as qualitative interpretability of the resulting topic models, we chose a topic model including 20 topics – an approach comparable with BIBREF22. We then examined the most salient and most typical words for each topic.
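One way to compute these two goodness-of-fit measures for a range of candidate topic numbers with Gensim is sketched below, assuming the bag-of-words `corpus` and Gensim `dictionary` described above; the step size over candidate topic counts is illustrative.

```python
from gensim.models import LdaModel, CoherenceModel

def evaluate_topic_numbers(corpus, dictionary, candidates=range(10, 101, 10)):
    """Log perplexity and UMass coherence for a range of topic counts."""
    scores = {}
    for k in candidates:
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
        log_perplexity = lda.log_perplexity(corpus)
        coherence = CoherenceModel(model=lda, corpus=corpus,
                                   coherence="u_mass").get_coherence()
        scores[k] = (log_perplexity, coherence)
    return scores
```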
Moreover, we used the ldavis package to analyze the structure of the resulting topic space BIBREF23. In order to do so, the Jensen-Shannon divergence between topics was calculated in a first step. In a second step, we applied multidimensional scaling (MDS) to project the inter-topic distances onto a two-dimensional plane. MDS is based on the idea of calculating dissimilarities between pairs of items of an input matrix while minimizing the strain function BIBREF24. In this case, the closer the topics are located to one another on the two-dimensional plane, the more they share salient terms and the more likely a combination of these topics appear in a song.
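The projection can be sketched as follows using SciPy and scikit-learn, assuming `lda` is a trained Gensim topic model; the original analysis relied on the ldavis package, which performs an equivalent computation, and note that `jensenshannon` returns the square root of the divergence.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.manifold import MDS

def topic_map(lda):
    """Project topics onto a 2D plane via Jensen-Shannon distances between topic-word distributions."""
    topics = lda.get_topics()                                    # shape: (num_topics, vocab_size)
    n = topics.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = jensenshannon(topics[i], topics[j])
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)                               # (num_topics, 2) coordinates
```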
Methodology ::: High-Level Audio Feature Extraction
The high-level audio feature models used had been constructed in previous examinations BIBREF25, BIBREF26. In those music perception studies, ratings were obtained for 212 music stimuli in an online listening experiment by 40 raters.
Based on this ground truth, prediction models for the automatic extraction of high-level music dimensions – including the concepts of perceived hardness/heaviness and darkness/gloominess in music – had been trained using machine learning methods. In a second step, the model obtained for hardness had been evaluated using further listening experiments on a new unseen set of audio stimuli BIBREF26. The model has been refined against this backdrop, resulting in an $R^2$ value of 0.80 for hardness/heaviness and 0.60 for darkness/gloominess using five-fold cross-validation.
The resulting models embedded features implemented in LibROSA BIBREF27, Essentia BIBREF28 as well as the timbral models developed as part of the AudioCommons project BIBREF29.
Methodology ::: Investigating the Connection between Audio and Text Features
Finally, we drew a random sample of 503 songs and used Spearman's $\rho $ to identify correlations between the topics retrieved and the audio dimensions obtained by the high-level audio feature models. We opted for Spearman’s $\rho $ since it does not assume normal distribution of the data and is less prone to outliers and zero-inflation than Pearson’s $r$. Bonferroni correction was applied in order to account for multiple testing.
Results ::: Textual Topics
Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.
The salient terms of the first topic – and in parts also the second – appear relatively generic, as terms like e.g. ‘know’, ‘never’, and ‘time’ occur in many contexts. However, the majority of the remaining topics reveal distinct lyrical themes described as being characteristic for the metal genre. ‘Religion & satanism’ (topic #5) and descriptions of ‘brutal death’ (topic #7) can be considered as being typical for black metal and death metal respectively, whereas ‘battle’ (topic #6), ‘landscape & journey’ (topic #11), ‘struggle for freedom’ (topic #12), and ‘dystopia’ (topic #15), are associated with power metal and other metal subgenres.
This is highlighted in detail in Figure FIGREF11. Here, the topic distributions for two exemplary bands contained within the sample are presented. For these heat maps, data has been aggregated over individual songs showing the topic distribution at the level of albums over a band’s history. The examples chosen illustrate the dependence between textual topics and musical subgenres. For the band Manowar, which is associated with the genre of heavy metal, power metal or true metal, a prevalence of topic #6 (‘battle’) can be observed, while a distinctive prevalence of topic #7 (‘brutal death’) becomes apparent for Cannibal Corpse – a band belonging to the subgenre of death metal.
Within the topic configuration obtained via multidimensional scaling (see Figure FIGREF12), two latent dimensions can be identified. The first dimension (PC1) distinguishes topics with more common wording on the right hand side from topics with less common wording on the left hand side. This also correlates with the weight of the topics within the corpus. The second dimension (PC2) is characterized by a contrast between transcendent and sinister topics dealing with occultism, metaphysics, satanism, darkness, and mourning (#9, #3, #5, #13, and #16) at the top and comparatively shallow content dealing with personal life and Rock’n’Roll lifestyle using a rather mundane or vulgar vocabulary (#1, #8, and #19) at the bottom. This contrast can be interpreted as ‘otherworldliness / individual-transcending narratives’ vs. ‘worldliness / personal life’.
Results ::: Correlations with Musical Dimensions
In the final step of our analysis, we calculated the association between the twenty topics discussed above and the two high-level audio features hardness and darkness using Spearman’s $\rho $. The results are visualized in Figure FIGREF13 and the $\rho $ values listed in table TABREF10.
Significant positive associations can be observed between musical hardness and the topics ‘brutal death’, ‘dystopia’, ‘archaisms & occultism’, ‘religion & satanism’, and ‘battle’, while it is negatively linked to relatively mundane topics concerning ‘personal life’ and ‘love & romance’. The situation is similar for dark/gloomy sounding music, which in turn is specifically related to themes such as ‘dystopia’ and ‘(psychological) madness’. Overall, the strength of the associations is moderate at best, with a tendency towards higher associations for hardness than darkness. The strongest association exists between hardness and the topic ‘brutal death’ ($\rho = 0.267$, $p < 0.01$).
Conclusion and Outlook
Applying the example of metal music, our work examined the textual topics found in song lyrics and investigated the association between these topics and high-level music features. By using LDA and MDS in order to explore prevalent topics and the topic space, typical text topics identified in qualitative analyses could be confirmed and objectified based on a large text corpus. These include e.g. satanism, dystopia or disgusting objects. It was shown that musical hardness is particularly associated with harsh topics like ‘brutal death’ and ‘dystopia’, while it is negatively linked to relatively mundane topics concerning personal life and love. We expect that even stronger correlations could be found for metal-specific topics when including more genres covering a wider range of hardness/darkness values.
Therefore, we suggest transferring the method to a sample including multiple genres. Moreover, an integration with metadata such as genre information would allow for the testing of associations between topics, genres and high-level audio features. This could help to better understand the role of different domains in an overall perception of genre-defining attributes such as hardness. | Unanswerable |
447eb98e602616c01187960c9c3011c62afd7c27 | 447eb98e602616c01187960c9c3011c62afd7c27_0 | Q: What are lyrical topics present in the metal genre?
Text: Introduction
As audio and text features provide complementary layers of information on songs, a combination of both data types has been shown to improve the automatic classification of high-level attributes in music such as genre, mood and emotion BIBREF0, BIBREF1, BIBREF2, BIBREF3. Multi-modal approaches interlinking these features offer insights into possible relations between lyrical and musical information (see BIBREF4, BIBREF5, BIBREF6).
In the case of metal music, sound dimensions like loudness, distortion and particularly hardness (or heaviness) play an essential role in defining the sound of this genre BIBREF7, BIBREF8, BIBREF9, BIBREF10. Specific subgenres – especially doom metal, gothic metal and black metal – are further associated with a sound that is often described as dark or gloomy BIBREF11, BIBREF12.
These characteristics are typically not limited to the acoustic and musical level. In a research strand that has so far been generally treated separately from the audio dimensions, lyrics from the metal genre have come under relatively close scrutiny (cf. BIBREF13). Topics typically ascribed to metal lyrics include sadness, death, freedom, nature, occultism or unpleasant/disgusting objects and are overall characterized as harsh, gloomy, dystopian, or satanic BIBREF14, BIBREF13, BIBREF15, BIBREF16, BIBREF17.
Until now, investigations on metal lyrics were limited to individual cases or relatively small corpora – with a maximum of 1,152 songs in BIBREF17. Besides this, the relation between the musical and the textual domain has not yet been explored. Therefore, we examine a large corpus of metal song lyrics, addressing the following questions:
Which topics are present within the corpus of metal lyrics?
Is there a connection between characteristic musical dimensions like hardness and darkness and certain topics occurring within the textual domain?
Methodology
In our sequential research design, the distribution of textual topics within the corpus was analyzed using latent Dirichlet allocation (LDA). This resulted in a topic model, which was used for a probabilistic assignment of topics to each of the song documents. Additionally, for a subset of these songs, audio features were extracted using models for high-level music dimensions. The use of automatic models for the extraction of both text as well as musical features allows for scalability as it enables a large corpus to be studied without depending on the process of manual annotation for each of the songs. The resulting feature vectors were then subjected to a correlation analysis. Figure FIGREF6 outlines the sequence of the steps taken in processing the data. The individual steps are explained in the following subsections.
Methodology ::: Text Corpus Creation and Cleaning
For gathering the data corpus, a web crawler was programmed using the Python packages Requests and BeautifulSoup. In total, 152,916 metal music lyrics were extracted from www.darklyrics.com.
Using Python’s langdetect package, all non-English texts were excluded. With the help of regular expressions, the texts were scanned for tokens indicating meta-information, which is not part of the actual lyrics. To this end, a list of stopwords referring to musical instruments or the production process (e.g. ‘recorded’, ‘mixed’, ‘arrangement by’, ‘band photos’) was defined in addition to common stopwords. After these cleaning procedures, 124,288 texts remained in the subsample. For text normalization, stemming and lemmatization were applied as further preprocessing steps.
Methodology ::: Topic Modelling via Latent Dirichlet Allocation
We performed an LDA BIBREF18 on the remaining subsample to construct a probabilistic topic model. The LDA models were created using the Python library Gensim BIBREF19. The lyrics were first converted to a bag-of-words format, and standard weighting of terms provided by the Gensim package was applied.
Log perplexity BIBREF20 and log UMass coherence BIBREF21 were calculated as goodness-of-fit measures evaluating topic models ranging from 10 to 100 topics. Considering these performance measures as well as qualitative interpretability of the resulting topic models, we chose a topic model including 20 topics – an approach comparable with BIBREF22. We then examined the most salient and most typical words for each topic.
Moreover, we used the ldavis package to analyze the structure of the resulting topic space BIBREF23. In order to do so, the Jensen-Shannon divergence between topics was calculated in a first step. In a second step, we applied multidimensional scaling (MDS) to project the inter-topic distances onto a two-dimensional plane. MDS is based on the idea of calculating dissimilarities between pairs of items of an input matrix while minimizing the strain function BIBREF24. In this case, the closer the topics are located to one another on the two-dimensional plane, the more they share salient terms and the more likely a combination of these topics appear in a song.
Methodology ::: High-Level Audio Feature Extraction
The high-level audio feature models used had been constructed in previous examinations BIBREF25, BIBREF26. In those music perception studies, ratings were obtained for 212 music stimuli in an online listening experiment by 40 raters.
Based on this ground truth, prediction models for the automatic extraction of high-level music dimensions – including the concepts of perceived hardness/heaviness and darkness/gloominess in music – had been trained using machine learning methods. In a second step, the model obtained for hardness had been evaluated using further listening experiments on a new unseen set of audio stimuli BIBREF26. The model has been refined against this backdrop, resulting in an $R^2$ value of 0.80 for hardness/heaviness and 0.60 for darkness/gloominess using five-fold cross-validation.
The resulting models embedded features implemented in LibROSA BIBREF27, Essentia BIBREF28 as well as the timbral models developed as part of the AudioCommons project BIBREF29.
Methodology ::: Investigating the Connection between Audio and Text Features
Finally, we drew a random sample of 503 songs and used Spearman's $\rho $ to identify correlations between the topics retrieved and the audio dimensions obtained by the high-level audio feature models. We opted for Spearman’s $\rho $ since it does not assume normal distribution of the data and is less prone to outliers and zero-inflation than Pearson’s $r$. Bonferroni correction was applied in order to account for multiple testing.
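A sketch of this analysis with SciPy is given below, assuming `topic_weights` holds the per-song topic proportions (one column per topic) and `audio_feature` the corresponding hardness or darkness scores; correcting over the topic tests for one audio feature is our reading of the Bonferroni setup.

```python
from scipy.stats import spearmanr

def correlate(topic_weights, audio_feature, alpha=0.05):
    """Spearman's rho per topic with a Bonferroni-corrected significance threshold."""
    n_topics = topic_weights.shape[1]
    corrected_alpha = alpha / n_topics            # Bonferroni correction over the topic tests
    results = []
    for t in range(n_topics):
        rho, p = spearmanr(topic_weights[:, t], audio_feature)
        results.append((t, rho, p, p < corrected_alpha))
    return results
```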
Results ::: Textual Topics
Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.
The salient terms of the first topic – and in parts also the second – appear relatively generic, as terms like e.g. ‘know’, ‘never’, and ‘time’ occur in many contexts. However, the majority of the remaining topics reveal distinct lyrical themes described as being characteristic for the metal genre. ‘Religion & satanism’ (topic #5) and descriptions of ‘brutal death’ (topic #7) can be considered as being typical for black metal and death metal respectively, whereas ‘battle’ (topic #6), ‘landscape & journey’ (topic #11), ‘struggle for freedom’ (topic #12), and ‘dystopia’ (topic #15), are associated with power metal and other metal subgenres.
This is highlighted in detail in Figure FIGREF11. Here, the topic distributions for two exemplary bands contained within the sample are presented. For these heat maps, data has been aggregated over individual songs showing the topic distribution at the level of albums over a band’s history. The examples chosen illustrate the dependence between textual topics and musical subgenres. For the band Manowar, which is associated with the genre of heavy metal, power metal or true metal, a prevalence of topic #6 (‘battle’) can be observed, while a distinctive prevalence of topic #7 (‘brutal death’) becomes apparent for Cannibal Corpse – a band belonging to the subgenre of death metal.
Within the topic configuration obtained via multidimensional scaling (see Figure FIGREF12), two latent dimensions can be identified. The first dimension (PC1) distinguishes topics with more common wording on the right hand side from topics with less common wording on the left hand side. This also correlates with the weight of the topics within the corpus. The second dimension (PC2) is characterized by a contrast between transcendent and sinister topics dealing with occultism, metaphysics, satanism, darkness, and mourning (#9, #3, #5, #13, and #16) at the top and comparatively shallow content dealing with personal life and Rock’n’Roll lifestyle using a rather mundane or vulgar vocabulary (#1, #8, and #19) at the bottom. This contrast can be interpreted as ‘otherworldliness / individual-transcending narratives’ vs. ‘worldliness / personal life’.
Results ::: Correlations with Musical Dimensions
In the final step of our analysis, we calculated the association between the twenty topics discussed above and the two high-level audio features hardness and darkness using Spearman’s $\rho $. The results are visualized in Figure FIGREF13 and the $\rho $ values listed in table TABREF10.
Significant positive associations can be observed between musical hardness and the topics ‘brutal death’, ‘dystopia’, ‘archaisms & occultism’, ‘religion & satanism’, and ‘battle’, while it is negatively linked to relatively mundane topics concerning ‘personal life’ and ‘love & romance’. The situation is similar for dark/gloomy sounding music, which in turn is specifically related to themes such as ‘dystopia’ and ‘(psychological) madness’. Overall, the strength of the associations is moderate at best, with a tendency towards higher associations for hardness than darkness. The strongest association exists between hardness and the topic ‘brutal death’ ($\rho = 0.267$, $p < 0.01$).
Conclusion and Outlook
Applying the example of metal music, our work examined the textual topics found in song lyrics and investigated the association between these topics and high-level music features. By using LDA and MDS in order to explore prevalent topics and the topic space, typical text topics identified in qualitative analyses could be confirmed and objectified based on a large text corpus. These include e.g. satanism, dystopia or disgusting objects. It was shown that musical hardness is particularly associated with harsh topics like ‘brutal death’ and ‘dystopia’, while it is negatively linked to relatively mundane topics concerning personal life and love. We expect that even stronger correlations could be found for metal-specific topics when including more genres covering a wider range of hardness/darkness values.
Therefore, we suggest transferring the method to a sample including multiple genres. Moreover, an integration with metadata such as genre information would allow for the testing of associations between topics, genres and high-level audio features. This could help to better understand the role of different domains in an overall perception of genre-defining attributes such as hardness. | Table TABREF10 displays the twenty resulting topics |
f398587b9a0008628278a5ea858e01d3f5559f65 | f398587b9a0008628278a5ea858e01d3f5559f65_0 | Q: By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?
Text: Introduction
Summarization aims to condense a piece of text to a shorter version, retaining the critical information. For dialogs, summarization has various promising applications in the real world. For instance, automatic summaries of doctor-patient interactions can save doctors a massive amount of time otherwise spent filling in medical records. There is also a general demand in industry for summarizing meetings in order to track project progress. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization is a promising direction within the summarization field.
There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them into a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly studies extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form the summary. Because dialogs are highly dependent on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization datasets like CNN/Daily Mail BIBREF3 consist of news documents. The AMI meeting corpus BIBREF4 is the common benchmark, but it only provides extractive summaries.
In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news documents. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place names are difficult to capture precisely, and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoders to the attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method takes delexicalized utterances as input to produce a delexicalized summary, and fills in slot values to generate the complete summary. Finally, we incorporate the dialog domain scaffold by jointly optimizing a dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics.
Related Work
BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on the Seq2Seq framework BIBREF8 and the attention mechanism BIBREF9, achieving state-of-the-art results on the Gigaword and DUC-2004 datasets. BIBREF10 proposed a copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of the extractive and abstractive approaches. BIBREF5 applied pointing BIBREF11 as the copy mechanism and used a coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.
Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.
Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpora. BIBREF26 annotated topic descriptions in the AMI meeting corpus as the summary. However, the topics they defined are coarse, such as “industrial designer presentation”. They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work.
Proposed Method
As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.
Proposed Method ::: Background
We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and the pointer network BIBREF11. The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives the word embedding of the previous word and generates a distribution to decide the target element in this step, retaining the decoder hidden states $s_t$. In Pointer-Generator, the attention distribution $a^t$ is computed as in BIBREF9:

$e_i^t = v^T \tanh (W_h h_i + W_s s_t + b_{attn}), \quad a^t = \mathrm{softmax}(e^t)$
where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.
With the attention distribution $a^t$, the context vector $h_t^*$ is computed as the weighted sum of the encoder's hidden states. The context vector can be regarded as the attended information from the source text:
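In symbols, the weighted sum is:

$$h_t^{*} = \sum_{i} a_i^{t} h_i$$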
Pointer-Generator differs from the typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. The generation probability $p_{gen}$ is calculated as a “soft switch” to choose between copying and generating:
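Using the parameters defined just below, this switch takes the standard Pointer-Generator form:

$$p_{gen} = \sigma \left( w_{h^*}^{T} h_t^{*} + w_s^{T} s_t + w_x^{T} x_t + b_{ptr} \right)$$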
where $x_t$ is the decoder input, and $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma$ is the sigmoid function, so the generation probability $p_{gen}$ lies in $[0, 1]$.
The ability to choose between copying and generating corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ over the extended vocabulary is computed as follows:
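In the Pointer-Generator formulation, the vocabulary distribution and the final mixture of generating and copying are:

$$P_{vocab} = \operatorname{softmax}\left(V^{\prime}\left(V\,[s_t, h_t^{*}] + b\right) + b^{\prime}\right)$$

$$P(w) = p_{gen} P_{vocab}(w) + \left(1 - p_{gen}\right) \sum_{i: w_i = w} a_i^{t}$$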
where $P_{vocab}$ is the distribution over the original vocabulary, and $V^{\prime}$, $V$, $b$ and $b^{\prime}$ are learnable parameters used to compute that distribution.
Proposed Method ::: Scaffold Pointer Network (SPNet)
Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different speaker roles, a semantic slot scaffold, and a dialog domain scaffold.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold
Our encoder-decoder framework employs separate encoding for the different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain the encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$. The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized from both encoders, and the pointing mechanism follows Equation DISPLAY_FORM4, yielding the merged context vector $h_t^{*}$:
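As an illustration only — not necessarily the authors' exact formulation — one plausible way to realize the $s_0$ initialization and the merged context vector is to concatenate the two encoders' outputs and project them; the dimensions below follow the settings reported later (256-dimensional bidirectional encoders, 512-dimensional decoder), while the concatenate-and-project merge itself is an assumption:

import torch
import torch.nn as nn

class DualEncoderMerge(nn.Module):
    # Merges user/system encoder outputs into a decoder initialization and a
    # single context vector. The concatenate-and-project merge is an assumption.
    def __init__(self, enc_hidden=256, dec_hidden=512):
        super().__init__()
        # each bidirectional encoder yields a 2 * enc_hidden final state
        self.init_proj = nn.Linear(2 * 2 * enc_hidden, dec_hidden)

    def forward(self, h_usr_last, h_sys_last, ctx_usr, ctx_sys):
        # h_*_last: (batch, 2*enc_hidden) final hidden states of each encoder
        # ctx_*:    (batch, 2*enc_hidden) per-encoder attention context vectors
        s0 = torch.tanh(self.init_proj(torch.cat([h_usr_last, h_sys_last], dim=-1)))
        h_star = torch.cat([ctx_usr, ctx_sys], dim=-1)
        return s0, h_star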
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold
We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces slot values with their semantic slot names (e.g., replacing 18:00 with [time]). Delexicalized texts are easier for language modeling to process, as they have a reduced vocabulary size. However, the generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or only filled in a single delexicalized utterance BIBREF31 as the generated response. We propose to perform delexicalization for dialog summarization, since delexicalized utterances can simplify dialog modeling. We then fill the slots in the generated templates using the copy and pointing mechanism.
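For concreteness, a minimal delexicalization step could look like the following; the slot-value mapping here is hypothetical, while in practice it comes from MultiWOZ's belief span annotation:

def delexicalize(utterance, slot_values):
    # Replace each known slot value with its slot-name placeholder.
    for slot, value in slot_values.items():
        utterance = utterance.replace(value, "[" + slot + "]")
    return utterance

print(delexicalize("I want to book a table at Curry Garden at 18:00",
                   {"restaurant_name": "Curry Garden", "time": "18:00"}))
# -> I want to book a table at [restaurant_name] at [time]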
We first train the model with the delexicalized utterances. The attention distribution $a^t$ over the source tokens then instructs the decoder to fill up the slots with lexicalized values.
Note that $w_{slot}$ denotes the tokens that represent slot names (e.g., [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5.
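A sketch of this re-lexicalization step, under the assumption that each generated slot token is filled with the source value receiving the highest attention (the paper's exact rule is given by its equation and may differ):

def fill_slots(generated_tokens, attention_per_step, source_values):
    # generated_tokens:    delexicalized summary tokens, e.g. ["...", "[time]", ...]
    # attention_per_step:  attention_per_step[t][i] = attention weight on source
    #                      position i when producing generated token t
    # source_values:       original (pre-delexicalization) value at each source position
    filled = []
    for t, tok in enumerate(generated_tokens):
        if tok.startswith("[") and tok.endswith("]"):  # a slot token such as [time]
            attn = attention_per_step[t]
            i = max(range(len(attn)), key=lambda j: attn[j])
            filled.append(source_values[i])
        else:
            filled.append(tok)
    return filled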
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold
We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates the content of the conversation task, for example, booking a hotel, restaurant, or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing the domain probability $d$. The $i^{th}$ element $d_i$ of $d$ represents the probability of the $i^{th}$ domain:
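A form consistent with this description — two linear layers followed by a sigmoid, writing $h$ for the concatenated encoder hidden state and omitting any inner activation as an assumption — is:

$$d = \sigma \left( U^{\prime} \left( U h + b_{d} \right) + b_{d}^{\prime} \right)$$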
where $U$, $U^{\prime}$, $b_{d}$ and $b_{d}^{\prime}$ are all trainable parameters of the classifier. We denote the loss function of summarization as $loss_1$ and that of domain classification as $loss_2$. Assuming the target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log-likelihood of $w_t^{*}$ over the generated sequence:
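Written out for a target sequence of length $T$:

$$loss_1 = -\frac{1}{T} \sum_{t=1}^{T} \log P\left(w_t^{*}\right)$$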
The domain classification task is a multi-label binary classification problem. We use the binary cross-entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task:
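A standard form consistent with this description, summing over the $|D|$ domains (whether an additional average over domains is taken is an assumption left open here), is:

$$loss_2 = -\sum_{i=1}^{|D|} \left[ \hat{d_i} \log d_i + \left(1 - \hat{d_i}\right) \log \left(1 - d_i\right) \right]$$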
where $|D|$ is the number of domains. Finally, we reweight the classification loss with the hyperparameter $\lambda$, and the overall objective function is:
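Assuming the usual weighted sum of the two losses:

$$loss = loss_1 + \lambda \, loss_2$$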
Experimental Settings ::: Dataset
We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, and taxis. There are 10,438 dialogs, spanning seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instructions were provided for crowd workers to perform the task. We use these instructions as the dialog summaries; an example is shown in Table TABREF25. Dialog domain labels are extracted from the existing MultiWOZ annotations. In the experiments, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
Experimental Settings ::: Evaluation Metrics
ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human-written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L, which measure the word overlap, bigram overlap, and longest common subsequence between the reference summary and the generated summary, respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:
Reference: You are going to [restaurant_name] at [time].
Summary: You are going to [restaurant_name] at.
In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, because it leaves out one of the most critical pieces of information: [time]. ROUGE treats each word equally in computing n-gram overlap, while informativeness actually varies: common words or phrases (e.g. “You are going to”) contribute significantly to the ROUGE score and readability, but they are almost irrelevant to the essential content. The semantic slot values (e.g. [restaurant_name], [time]) are more essential than other words in the summary, but ROUGE does not take this into consideration. To address this drawback of ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:
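In terms of the quantities defined below, this recall is:

$$\mathrm{CIC} = \frac{\sum_{v \in V} Count_{match}(v)}{m}$$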
where $V$ stands for the set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and the reference summary, and $m$ is the number of values in the set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to reflect overall performance.
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
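A direct implementation of CIC for a single domain, following the definition above (the matching rule — exact substring match here — and the empty-reference convention are implementation choices, not specified by the metric's definition):

def cic(candidate_summary, reference_values):
    # reference_values: slot values extracted from the reference summary,
    # e.g. ["Curry Garden", "18:00", "Sunday"].
    if not reference_values:
        return 1.0  # edge-case convention chosen here
    matched = sum(1 for v in reference_values if v in candidate_summary)
    return matched / len(reference_values)

print(cic("You are going to Curry Garden at 18:00.", ["Curry Garden", "18:00", "Sunday"]))
# -> 0.666...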
Experimental Settings ::: Implementation Details
We implemented our baselines with the OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we merge slots that refer to the same information across different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and those of the unidirectional LSTM decoder to 512 dimensions. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We halve the learning rate when the validation loss increases to avoid overfitting. We set the hyperparameter $\lambda$ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameters. Our model takes about 15 epochs to converge with the multi-task objective and about seven epochs without it.
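For concreteness, these settings correspond to a setup along the following lines (a sketch assuming PyTorch and a placeholder vocabulary size; the actual training code may differ):

import torch
import torch.nn as nn

VOCAB_SIZE = 50000  # placeholder; the real size depends on the delexicalized corpus
LAMBDA = 0.5        # weight of the domain-classification loss
BATCH_SIZE = 8
BEAM_SIZE = 3

embedding = nn.Embedding(VOCAB_SIZE, 128)  # embeddings trained from scratch
user_encoder = nn.LSTM(128, 256, batch_first=True, bidirectional=True)
system_encoder = nn.LSTM(128, 256, batch_first=True, bidirectional=True)
decoder = nn.LSTM(128, 512, batch_first=True)

params = (list(embedding.parameters()) + list(user_encoder.parameters())
          + list(system_encoder.parameters()) + list(decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999))
# The learning rate is halved whenever validation loss increases, e.g. via a
# manual check at the end of each epoch.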
Results and Discussions ::: Automatic Evaluation Results
To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. At inference time, we use the length penalty and coverage penalty mentioned in BIBREF36, and we keep the hyperparameters of the original implementation BIBREF5. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest scores in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores but relatively low CIC scores, which suggests that the baselines have room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as the base model of SPNet because Transformer offers only a small improvement over Pointer-Generator while incurring a higher cost in training time and computing resources. SPNet outperforms the other methods on all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's improved performance, bringing the largest increase on all automatic evaluation metrics.
Results and Discussions ::: Human Evaluation Results
We also perform human evaluation to verify whether our method's improvements on automatic evaluation metrics entail better human-perceived quality. We randomly select 100 test samples from the MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, the reference summary, and the summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators — relevance, conciseness and readability — on a 1 to 5 scale, and to rank the summary pair (ties allowed).
We present the human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator on all three indicators, scoring clearly better on relevance and readability. All generated summaries are relatively concise, so the conciseness scores are very similar. The ground truth is still perceived as more relevant and readable than SPNet's results, but it does not receive a high absolute score. From the evaluators' feedback, we found that they felt the ground truth did not cover all the necessary information in the conversation and that its phrasing was not entirely natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. The ranking evaluation shows larger differences between the summaries: SPNet outperforms Pointer-Generator by a large margin, and its performance is relatively close to the ground-truth summary.
Results and Discussions ::: Case study
Table TABREF25 shows example summaries from all models along with the ground-truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs contain redundant information, and a model that does not separate speakers ignores this property.
Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) or its address (517a Coldham Lane) referred to in the source. This occurs because the ground-truth summary does not cover them in the training data. As a supervised method, SPNet can hardly generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix).
Furthermore, although SPNet achieves much-improved performance, applying it still requires extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern to model, and most multi-domain dialog corpora have domain annotations; for other texts such as news, topic categories such as sports or entertainment can serve as domain annotations. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting.
Conclusion and Future Work
We adapt a dialog generation dataset, MultiWOZ, into an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance and serves as a complement to ROUGE. SPNet outperforms the baseline methods on both automatic and human evaluation metrics, which suggests that incorporating semantic scaffolds effectively improves abstractive summarization quality in the dialog setting.
Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply the semantic slot scaffold to news summarization; specifically, we can annotate critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. | SPNet vs best baseline:
ROUGE-1: 90.97 vs 90.68
CIC: 70.45 vs 70.25 |
d5f8707ddc21741d52b3c2a9ab1af2871dc6c90b | d5f8707ddc21741d52b3c2a9ab1af2871dc6c90b_0 | Q: What automatic and human evaluation metrics are used to compare SPNet to its counterparts?
ROUGE and CIC, relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair
58f3bfbd01ba9768172be45a819faaa0de2ddfa4 | 58f3bfbd01ba9768172be45a819faaa0de2ddfa4_0 | Q: Is proposed abstractive dialog summarization dataset open source?
Text: Introduction
Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.
There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary.
In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics.
Related Work
BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.
Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.
Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work.
Proposed Method
As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.
Proposed Method ::: Background
We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:
where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.
With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:
Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch" to choose from copy and generation:
where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.
The ability to select from copy and generation corresponds to a dynamic vocabulary. Pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary(OOV) words appeared in the source text. The final probability distribution $P(w)$ on extended vocabulary is computed as follows:
where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate such distribution.
Proposed Method ::: Scaffold Pointer Network (SPNet)
Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold
Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:
The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold
We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.
We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:
Note that $w_{slot}$ specifies the tokens that represents the slot name (e.g. [hotel_place], [time]). Decoder directly copies lexicalized value $value(w_i)$ conditioned on attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as Equation DISPLAY_FORM5.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold
We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:
where $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:
The domain classification task is a multi-label binary classification problem. We use binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and predict probability $d_i$ for this task:
where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\lambda $ and the objective function is:
Experimental Settings ::: Dataset
We validate SPNet on MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and a information center clerk on varies booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning over seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instruction is provided for crowd workers to perform the task. We use the instructions as the dialog summary, and an example data is shown in Table TABREF25. Dialog domain label is extracted from existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing.
Experimental Settings ::: Evaluation Metrics
ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:
Reference: You are going to [restaurant_name] at [time].
Summary: You are going to [restaurant_name] at.
In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:
where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
Experimental Settings ::: Implementation Details
We implemented our baselines with OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter $\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively.
Results and Discussions ::: Automatic Evaluation Results
To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.
Results and Discussions ::: Human Evaluation Results
We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).
We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary.
Results and Discussions ::: Case study
Table TABREF25 shows an example summary from each model along with the ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). The missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs contain redundant information, and a model with a single shared encoder ignores this dialog property.
Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) or its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summaries in the training data do not cover such information. As a supervised method, SPNet can hardly generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can still correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix).
Furthermore, although SPNet achieves much-improved performance, applying it still requires extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold comes naturally, and most multi-domain dialog corpora carry domain annotations. For other texts such as news, topic categories (e.g., sports or entertainment) can serve as domain annotations. We find that the semantic slot scaffold brings the most significant improvement, but slots are seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting.
Conclusion and Future Work
We adapt MultiWOZ, a dialog generation dataset, into an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates speaker role, semantic slot, and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance and serves as a complementary metric to ROUGE. SPNet outperforms baseline methods on both automatic and human evaluation metrics, which suggests that incorporating semantic scaffolds effectively improves abstractive summarization quality in the dialog setting.
Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. | Unanswerable |
73633afbefa191b36cca594977204c6511f9dad4 | 73633afbefa191b36cca594977204c6511f9dad4_0 | Q: Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?
A: Not at the moment, but summaries can additionally be extended with these annotations.
db39a71080e323ba2ddf958f93778e2b875dcd24 | db39a71080e323ba2ddf958f93778e2b875dcd24_0 | Q: How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?
A: Our encoder-decoder framework employs separate encoding for different speakers in the dialog; we integrate the semantic slot scaffold by performing delexicalization on the original dialogs; and we integrate the dialog domain scaffold through a multi-task framework.
6da2cb3187d3f28b75ac0a61f6562a8adf716109 | 6da2cb3187d3f28b75ac0a61f6562a8adf716109_0 | Q: What are previous state-of-the-art document summarization methods used?
Text: Introduction
Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.
There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary.
In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics.
Related Work
BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.
Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.
Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work.
Proposed Method
As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.
Proposed Method ::: Background
We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:
where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.
With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:
Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch" to choose from copy and generation:
where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.
The ability to select from copy and generation corresponds to a dynamic vocabulary. Pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary(OOV) words appeared in the source text. The final probability distribution $P(w)$ on extended vocabulary is computed as follows:
where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate such distribution.
Proposed Method ::: Scaffold Pointer Network (SPNet)
Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold
Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:
The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold
We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.
We first train the model with the delexicalized utterances. The attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:
Note that $w_{slot}$ specifies the tokens that represent the slot names (e.g. [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold
We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates different conversation task content, for example, booking a hotel, restaurant, or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing the domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:
where $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:
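From this description, $loss_1$ is
$$loss_1 = \frac{1}{T} \sum _{t=1}^{T} -\log P(w_t^{*})$$
where $T$ is the length of the generated summary.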
The domain classification task is a multi-label binary classification problem. We use the binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task:
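Written out (averaging over the domains is an assumption), this is
$$loss_2 = -\frac{1}{|D|} \sum _{i=1}^{|D|} \big ( \hat{d_i} \log d_i + (1 - \hat{d_i}) \log (1 - d_i) \big )$$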
where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\lambda $ and the objective function is:
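Given this description, the combined objective is presumably
$$loss = loss_1 + \lambda \, loss_2 .$$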
Experimental Settings ::: Dataset
We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instructions are provided for crowd workers to perform the task. We use the instructions as the dialog summaries, and an example is shown in Table TABREF25. Dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
Experimental Settings ::: Evaluation Metrics
ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:
Reference: You are going to [restaurant_name] at [time].
Summary: You are going to [restaurant_name] at.
In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, because it leaves out one of the most critical pieces of information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to”) significantly contribute to the ROUGE score and readability, but they are almost irrelevant to the essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential than other words in the summary. However, ROUGE does not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:
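Reconstructed from this description, the metric is simply the fraction of reference slot values recovered:
$$\mathrm{CIC} = \frac{\sum _{v \in V} Count_{match}(v)}{m}$$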
where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
Experimental Settings ::: Implementation Details
We implemented our baselines with the OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimensions. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We halve the learning rate to avoid overfitting when the validation loss increases. We set the hyperparameter $\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameters. Our model with and without multi-task learning takes about 15 epochs and seven epochs to converge, respectively.
Results and Discussions ::: Automatic Evaluation Results
To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but relatively low CIC scores. This suggests that the baselines have more room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but has a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods in all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's increased performance, bringing the largest increase in all automatic evaluation metrics.
Results and Discussions ::: Human Evaluation Results
We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).
We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary.
Results and Discussions ::: Case study
Table TABREF25 shows an example summary from all models along with the ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-speaker model ignores this dialog property.
Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover it in the training data. As a supervised method, it is hard for SPNet to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix).
Furthermore, although our SPNet achieves a much-improved performance, the application of SPNet still needs extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpora have domain annotations. For other texts, such as news, topic categories such as sports or entertainment can be used as domain annotations. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting.
Conclusion and Future Work
We adapt a dialog generation dataset, MultiWOZ, to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms the baseline methods in both automatic and human evaluation metrics. This suggests that involving semantic scaffolds effectively improves abstractive summarization quality in the dialog setting.
Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. | Pointer-Generator, Transformer |
c47e87efab11f661993a14cf2d7506be641375e4 | c47e87efab11f661993a14cf2d7506be641375e4_0 | Q: How does new evaluation metric considers critical informative entities?
Text: Introduction
Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.
There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them into a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly studies extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form a summary. Because dialogs are highly dependent on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarizing dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization datasets like CNN/Daily Mail BIBREF3 are based on news documents. The AMI meeting corpus BIBREF4 is a common benchmark, but it only has extractive summaries.
In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics.
Related Work
BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.
Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.
Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work.
Proposed Method
As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.
Proposed Method ::: Background
We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:
where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.
With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:
Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch" to choose from copy and generation:
where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.
The ability to select between copying and generating corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ on the extended vocabulary is computed as follows:
where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate such distribution.
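As an illustration only (this is not the authors' code, and all tensor names are hypothetical), the mixture over the extended vocabulary can be sketched in PyTorch as follows:

```python
import torch

def final_distribution(p_gen, vocab_dist, attn_dist, src_ids, extended_vsize):
    """Mix the generation and copy distributions over the extended vocabulary.

    p_gen:       (batch, 1) soft switch in [0, 1]
    vocab_dist:  (batch, vocab_size) softmax over the fixed vocabulary
    attn_dist:   (batch, src_len) attention weights over the source tokens
    src_ids:     (batch, src_len) long tensor of source token ids in the extended vocabulary
    """
    batch_size, vocab_size = vocab_dist.size()
    # The generator assigns zero mass to source-only (OOV) ids.
    extra_zeros = torch.zeros(batch_size, extended_vsize - vocab_size)
    gen_part = p_gen * torch.cat([vocab_dist, extra_zeros], dim=1)
    # Scatter-add the copy probabilities onto the positions of the source ids.
    return gen_part.scatter_add(1, src_ids, (1.0 - p_gen) * attn_dist)
```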
Proposed Method ::: Scaffold Pointer Network (SPNet)
Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold
Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:
The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold
We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.
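A minimal sketch of the delexicalization step, assuming a value-to-slot mapping is available from the annotation (the dictionary below is purely illustrative, not taken from the dataset):

```python
import re

# Hypothetical slot annotation: lexicalized value -> slot placeholder.
SLOT_VALUES = {"18:00": "[time]", "Pizza Hut": "[restaurant_name]"}

def delexicalize(utterance, slot_values=SLOT_VALUES):
    """Replace annotated slot values with their slot-name placeholders."""
    for value, slot in slot_values.items():
        utterance = re.sub(re.escape(value), slot, utterance)
    return utterance

# delexicalize("I booked Pizza Hut at 18:00")
# -> "I booked [restaurant_name] at [time]"
```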
We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:
Note that $w_{slot}$ specifies the tokens that represent the slot names (e.g. [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold
We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:
where $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:
The domain classification task is a multi-label binary classification problem. We use the binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task:
where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\lambda $ and the objective function is:
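A minimal sketch of this multi-task objective, assuming the per-step target-word probabilities and the raw domain logits are already computed (all names here are hypothetical, not the authors' code):

```python
import torch
import torch.nn.functional as F

def multitask_loss(step_probs, domain_logits, domain_labels, lam=0.5):
    """Combine the summarization NLL with the domain-classification BCE.

    step_probs:    (T,) probabilities P(w_t*) of the target words
    domain_logits: (num_domains,) raw classifier outputs
    domain_labels: (num_domains,) multi-hot float labels
    """
    loss_summ = -torch.log(step_probs + 1e-12).mean()  # mean negative log likelihood
    loss_dom = F.binary_cross_entropy_with_logits(domain_logits, domain_labels)
    return loss_summ + lam * loss_dom
```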
Experimental Settings ::: Dataset
We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instructions are provided for crowd workers to perform the task. We use the instructions as the dialog summaries, and an example is shown in Table TABREF25. Dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
Experimental Settings ::: Evaluation Metrics
ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:
Reference: You are going to [restaurant_name] at [time].
Summary: You are going to [restaurant_name] at.
In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:
where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.
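Under one straightforward reading of this definition (each reference value counted once), CIC can be computed as follows; the function and its arguments are illustrative, not the authors' implementation:

```python
def cic(candidate_summary, reference_values):
    """Recall of reference slot values that reappear in the candidate summary."""
    if not reference_values:
        return 0.0
    matched = sum(1 for v in reference_values if v in candidate_summary)
    return matched / len(reference_values)

# cic("You are going to Pizza Hut at.", ["Pizza Hut", "18:00"]) -> 0.5
```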
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
Experimental Settings ::: Implementation Details
We implemented our baselines with OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter $\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively.
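A minimal sketch of the optimizer setup and the halve-on-validation-increase schedule described here, using a stand-in module (this is not the authors' training code):

```python
import torch

model = torch.nn.LSTM(input_size=128, hidden_size=256, bidirectional=True)  # stand-in encoder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

def maybe_halve_lr(optimizer, val_loss, prev_val_loss):
    """Halve the learning rate whenever the validation loss increases."""
    if prev_val_loss is not None and val_loss > prev_val_loss:
        for group in optimizer.param_groups:
            group["lr"] *= 0.5
```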
Results and Discussions ::: Automatic Evaluation Results
To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but relatively low CIC scores. This suggests that the baselines have more room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but has a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods in all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's increased performance, bringing the largest increase in all automatic evaluation metrics.
Results and Discussions ::: Human Evaluation Results
We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).
We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary.
Results and Discussions ::: Case study
Table TABREF25 shows an example summary from all models along with the ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-speaker model ignores this dialog property.
Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover it in the training data. As a supervised method, it is hard for SPNet to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix).
Furthermore, although our SPNet achieves a much-improved performance, the application of SPNet still needs extra annotations for semantic scaffolds. For a dialog dataset, speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpus has the domain annotation. While for texts, for example news, its topic categorization such as sports or entertainment can be used as domain annotation. We find that semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team name in sports news or professional terminology in a technical meeting.
Conclusion and Future Work
We adapt a dialog generation dataset, MultiWOZ to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as the semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric CIC that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. It suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog scene.
Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. | Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities |
14684ad200915ff1e3fc2a89cb614e472a1a2854 | 14684ad200915ff1e3fc2a89cb614e472a1a2854_0 | Q: Is new evaluation metric extension of ROGUE?
Text: Introduction
Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.
There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them into a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly studies extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form a summary. Because dialogs are highly dependent on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarizing dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization datasets like CNN/Daily Mail BIBREF3 are based on news documents. The AMI meeting corpus BIBREF4 is a common benchmark, but it only has extractive summaries.
In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics.
Related Work
BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.
Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.
Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work.
Proposed Method
As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.
Proposed Method ::: Background
We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:
where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.
With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:
Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch" to choose from copy and generation:
where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.
The ability to select between copying and generating corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ on the extended vocabulary is computed as follows:
where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate such distribution.
Proposed Method ::: Scaffold Pointer Network (SPNet)
Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold
Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:
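One plausible realization of this two-encoder setup is sketched below; since the exact form of the merge into $s_0$ is not given here, the linear projection of the concatenated final states is an assumption, and all dimensions are illustrative:

```python
import torch
import torch.nn as nn

class RoleEncoders(nn.Module):
    """Separate BiLSTM encoders for user and system turns (illustrative sketch)."""

    def __init__(self, emb_dim=128, hid_dim=256, dec_dim=512):
        super().__init__()
        self.user_enc = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.sys_enc = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Assumed way to merge the two encoders into the decoder's initial state s_0.
        self.init_proj = nn.Linear(4 * hid_dim, dec_dim)

    def forward(self, user_emb, sys_emb):
        h_usr, _ = self.user_enc(user_emb)   # (batch, usr_len, 2*hid_dim)
        h_sys, _ = self.sys_enc(sys_emb)     # (batch, sys_len, 2*hid_dim)
        merged = torch.cat([h_usr[:, -1], h_sys[:, -1]], dim=-1)
        s_0 = torch.tanh(self.init_proj(merged))
        return h_usr, h_sys, s_0
```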
The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold
We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.
We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:
Note that $w_{slot}$ specifies the tokens that represent the slot names (e.g. [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5.
Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold
We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates different conversation task content, for example, booking a hotel, restaurant, or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing the domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:
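A sketch of the two-linear-layer, multi-label classifier described here; the hidden size, input size and the inner ReLU are assumptions, with $U$, $U^{\prime }$, $b_{d}$, $b_{d}^{\prime }$ corresponding to the weights and biases of the two linear layers:

```python
import torch
import torch.nn as nn

class DomainClassifier(nn.Module):
    """Two linear layers over the concatenated encoder state, one sigmoid per domain."""

    def __init__(self, enc_dim=1024, hid_dim=256, num_domains=7):
        super().__init__()
        self.fc1 = nn.Linear(enc_dim, hid_dim)       # U, b_d
        self.fc2 = nn.Linear(hid_dim, num_domains)   # U', b_d'
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, enc_state, domain_labels=None):
        logits = self.fc2(torch.relu(self.fc1(enc_state)))  # inner ReLU is an assumption
        probs = torch.sigmoid(logits)                        # d_i: probability of domain i
        loss = self.bce(logits, domain_labels) if domain_labels is not None else None
        return probs, loss
```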
where $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:
The domain classification task is a multi-label binary classification problem. We use the binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task:
where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\lambda $ and the objective function is:
Experimental Settings ::: Dataset
We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instructions are provided for crowd workers to perform the task. We use the instructions as the dialog summaries, and an example is shown in Table TABREF25. Dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
Experimental Settings ::: Evaluation Metrics
ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:
Reference: You are going to [restaurant_name] at [time].
Summary: You are going to [restaurant_name] at.
In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:
where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
Experimental Settings ::: Implementation Details
We implemented our baselines with OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter $\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively.
Results and Discussions ::: Automatic Evaluation Results
To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but relatively low CIC scores. This suggests that the baselines have more room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but has a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods in all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's increased performance, bringing the largest increase in all automatic evaluation metrics.
Results and Discussions ::: Human Evaluation Results
We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).
We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary.
Results and Discussions ::: Case study
Table TABREF25 shows an example summary from all models along with the ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-speaker model ignores this dialog property.
Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover it in the training data. As a supervised method, it is hard for SPNet to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix).
Furthermore, although our SPNet achieves a much-improved performance, the application of SPNet still needs extra annotations for semantic scaffolds. For a dialog dataset, speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpus has the domain annotation. While for texts, for example news, its topic categorization such as sports or entertainment can be used as domain annotation. We find that semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team name in sports news or professional terminology in a technical meeting.
Conclusion and Future Work
We adapt a dialog generation dataset, MultiWOZ to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as the semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric CIC that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. It suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog scene.
Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. | No |
8d1f9d3aa2cc2e2e58d3da0f5edfc3047978f3ee | 8d1f9d3aa2cc2e2e58d3da0f5edfc3047978f3ee_0 | Q: What measures were used for human evaluation?
Text: Introduction
Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence and especially in natural language processing. It is an ability of combining commonsense facts and logic rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human-beings while challenging for machine reasoners.
There have been a few recent tasks and datasets for testing machine commonsense, and most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We call this kind of task deterministic commonsense reasoning because it focuses on modeling the plausibility of given complete scenes. The systems for these tasks have to work with a biased selection of distractors, and are thus less practical or challenging: simply fine-tuning such large pre-trained language encoders can yield performance near or exceeding that of humans BIBREF2. On the other hand, little work has been done so far on testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.
Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.
Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.
Problem Formulation
In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.
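To make the input/output contract concrete, the following is a minimal illustrative sketch of how one instance could be represented in code; the class and field names (CommonGenInstance, concepts, scene, rationales) are our own assumptions rather than part of any released format, while the example values come from the scene discussed in the introduction.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class CommonGenInstance:
    concepts: FrozenSet[str]          # x: an unordered set of 3-5 common nouns/verbs
    scene: str                        # y: a simple, grammatical sentence covering every concept in x
    rationales: List[str] = field(default_factory=list)  # r: optional commonsense explanations

example = CommonGenInstance(
    concepts=frozenset({"apple", "bag", "pick", "place", "tree"}),
    scene="A boy picks some apples from a tree and places them into a bag.",
    rationales=["apples grow on trees",
                "bags are containers that you can put something in"],
)
```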
The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.
Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where the focus is on how to decode sentences when some words/phrases (e.g. terminology) must be present in the target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulties (e.g. “place” can be a verb/noun).
Commonsense Reasoning. Apart from the challenge of constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge to generate the most plausible scenes. Recall the illustrative example in Figure FIGREF1: even such a simple scene generation process needs a good deal of commonsense knowledge, such as: 1) “apples grow on trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. Expected reasoners have to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”.
The CommonGen Dataset
In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section.
The CommonGen Dataset ::: Collecting Concept-Sets with Captions
Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability to generate natural scenes with a given set of concepts. The concept-sets in our task are expected to be likely to co-occur in natural, daily-life scenes. The concepts in image/video captions, which usually describe scenes in our daily life, possess this desired property. We therefore collect a large number of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.
We assume that if a set of concepts is mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process them and take the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set. Thus, the dev/test sets can better measure the generalization ability of models on unseen combinations of concepts.
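As a rough sketch of this counting step (not the authors' actual pipeline), the co-occurrence statistics could be gathered as follows, assuming the nouns/verbs of each caption have already been extracted and lemmatized:

```python
from collections import Counter
from itertools import combinations

def concept_set_frequencies(captions_concepts, set_sizes=(3, 4, 5)):
    """Count how often each 3/4/5-concept-set is fully covered by a single caption.

    `captions_concepts` is a list of sets, one per caption, containing the
    lemmatized nouns and verbs mentioned in that caption.
    """
    freq = Counter()
    for concepts in captions_concepts:
        for k in set_sizes:
            for combo in combinations(sorted(concepts), k):
                freq[combo] += 1   # the sorted tuple is a canonical key for the unordered set
    return freq

captions = [{"boy", "pick", "apple", "tree", "bag", "place"},
            {"dog", "catch", "frisbee", "park"}]
print(concept_set_frequencies(captions)[("apple", "bag", "pick")])  # 1
```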
The CommonGen Dataset ::: Crowd-Sourcing via AMT
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.
We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data.
The CommonGen Dataset ::: Statistics
We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that there are 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models can perform with unseen concepts.
We analyze the overlap between training concept-sets and dev/test concept-sets. On average, we find that 98.8% of the training instances share no common concept at all with the dev/test data, so the dev/test sets can help us analyze model performance on new combinations of concepts.
We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs.
Methods
In this section, we introduce the methods that we adopt for the proposed constrained text generation task, grouped into several types as follows. First, we consider different kinds of encoder-decoder architectures with a copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize a state-of-the-art pre-trained sentence generation model for our task. Moreover, we include three typical models for abstractive summarization, story generation, and keyword-based decoding of language models, respectively.
Methods ::: Seq-to-Seq Learning
One very straightforward way is to form this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNN (bRNN) BIBREF17 or Transformer (Trans.) BIBREF18 can be directly adopted to the task, just like many other conditional sequence generation problems (translation, summarization, etc.).
Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).
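A minimal PyTorch sketch of this order-insensitive “mean encoder” idea is given below; the embedding and hidden sizes, the masking scheme, and the projection layer are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class MeanConceptEncoder(nn.Module):
    """Order-insensitive encoder: embed each concept token and average the embeddings."""
    def __init__(self, vocab_size: int, emb_dim: int = 300, hid_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.proj = nn.Sequential(nn.Linear(emb_dim, hid_dim), nn.ReLU())

    def forward(self, concept_ids: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # concept_ids, mask: (batch, max_concepts); mask is 1 for real concepts, 0 for padding
        emb = self.embed(concept_ids)                                # (batch, max_concepts, emb_dim)
        m = mask.unsqueeze(-1).float()
        pooled = (emb * m).sum(dim=1) / m.sum(dim=1).clamp(min=1.0)  # mean over real concepts only
        return self.proj(pooled)                                     # (batch, hid_dim)

encoder = MeanConceptEncoder(vocab_size=1000)
ids = torch.tensor([[5, 17, 42, 0, 0]])
mask = torch.tensor([[1, 1, 1, 0, 0]])
print(encoder(ids, mask).shape)  # torch.Size([1, 512])
```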
Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences with a constraint on input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods by OpenNMT-py BIBREF20.
Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23.
Methods ::: A BERT-based Method: UniLM
We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objective. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding.
Methods ::: Other methods
Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply Pointer Generator Networks (“PointerGen”) BIBREF26 and Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model respectively for our task.
Methods ::: Incorporating Commonsense Rationales
We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings).
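The knowledge-aware input format can be illustrated with a tiny helper like the one below; the exact delimiter, ordering, and tokenization are assumptions, since the text only specifies the “[rationales$|$concept-set]” form.

```python
def build_knowledge_aware_input(concepts, rationales):
    """Concatenate rationale sentences and the concept-set into a single source string."""
    rationale_part = " ".join(rationales)          # retrieved OMCS sentences or ground-truth rationales
    concept_part = " ".join(sorted(concepts))      # the original concept-set string
    return f"{rationale_part} | {concept_part}"

print(build_knowledge_aware_input(
    {"apple", "bag", "pick", "place", "tree"},
    ["apples grow on trees", "bags are containers that you can put something in"],
))
```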
Evaluation
Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis.
Evaluation ::: Setup
We use the proposed CommonGen dataset in two settings: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4, while for the knowledge-aware setting we concatenate the rationales and the input concept-sets together as the inputs (“$+r$”).
Evaluation ::: Automatic Metrics
To automatically evaluate our methods, we propose to use widely used metrics for image/video captioning. This is because the proposed CommonGen task can also be regarded as a captioning task where the context consists of incomplete scenes given as concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons clearer, we report the delta of the BERTScore results, obtained by subtracting the score of merely using the input concept-sets as target sentences, named $\triangle $BERTS.
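For illustration, the $\triangle $BERTS computation could be sketched as follows with the open-source bert-score package; the choice of the F1 variant, the handling of references, and the way concept-sets are joined into a baseline string are our assumptions, not details given in the text.

```python
from bert_score import score  # pip install bert-score

def delta_berts(predictions, references, concept_sets, lang="en"):
    """BERTScore F1 of model outputs minus BERTScore F1 of the bare concept strings."""
    _, _, f_pred = score(predictions, references, lang=lang, verbose=False)
    baseline = [" ".join(sorted(c)) for c in concept_sets]   # concept-set itself used as the "prediction"
    _, _, f_base = score(baseline, references, lang=lang, verbose=False)
    return (f_pred.mean() - f_base.mean()).item()
```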
To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”.
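A hedged sketch of this human-bound estimation is shown below; `metric_fn` is a placeholder for any of the metrics above, and the simple averaging over references and examples is an assumption about how per-sentence scores are aggregated.

```python
def human_bound(references_per_example, metric_fn):
    """Treat each human reference in turn as the prediction and score it against all references.

    `references_per_example`: list of lists of reference sentences, one list per concept-set.
    `metric_fn(prediction, references) -> float`: placeholder for BLEU, CIDEr, SPICE, etc.
    """
    scores = []
    for refs in references_per_example:
        for ref in refs:
            scores.append(metric_fn(ref, refs))   # the reference set includes the sentence itself
    return sum(scores) / len(scores)
```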
Evaluation ::: Experimental Results
We present the experimental results for the five groups of methods introduced in Section SECREF4. We find that the UniLM model outperforms all other baseline methods by a large margin, which is expected since it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still far from the human bound, and this margin is even larger on the test data.
We notice that the most recent edit-based model, LevenTrans, achieves the best performance among the models without any pre-training. This shows that edit-based sequence generation models can better deal with cases where target sentences share similar vocabulary with the source ones. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might be because they are specially designed for machine translation.
An order-insensitive sequence/set encoder, the “mean encoder”, outperforms order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” and “Trans. w/o Pos”. We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can better support the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures do not outperform simpler models like bRNN.
As for the use of additional sentences retrieved from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not actually aligned with the training data, and more sophisticated methods are needed for encoding such non-sequential facts in a more compositional way.
Evaluation ::: Human Evaluation
From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like the CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are educated to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gains of each system in all 100 cases:
$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\in [1,7]$.
As shown in Table TABREF22, we compare different systems including human bound for both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlation between human evaluation and CIDEr and SPICE are better than the other metrics (see Table TABREF15).
Evaluation ::: Qualitative Analysis
To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts and can also produce repetitions of given concepts (e.g. “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes less sense, because a man usually swallows a sword first before he pulls it out in such performances.
Related Work ::: Machine Common Sense
Machine common sense (MCS) has long been considered as one of the most significant area in artificial intelligence. Recently, there are various emerging datasets for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them are in a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning and there are five candidate answers (words/phrases). The SWAG task asks models to select which situation is the most plausible next situation, given a sentence describing an event.
The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning.
Related Work ::: Constrained Text Generation
Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The most similar scenario with our task is lexically constrained sentence encoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.
Classic methods usually modify the (beam) search algorithms to accommodate lexical constraints, such as Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works at the inference stage to sample sentences containing a sequence of multiple keywords from language models. However, our task brings more challenges: 1) we do not assume there is a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and also work extremely slowly when generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are based on editing operations and non-autoregressive decoding. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned on multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while being far from human-level performance and hardly interpretable.
Conclusion
In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through our extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed even by the state-of-the-art pre-trained language generation model.
For the future research, we believe the following directions are highly valuable to explore: 1) specially designed metrics for automatic evaluation that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge into sentence generation processes; 3) explicitly modeling keyword-centric edits (e.g. insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models performed well on CommonGen can be easily transferred to other commonsense-required reasoning tasks with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering. | To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). |
5065ff56d3c295b8165cb20d8bcfcf3babe9b1b8 | 5065ff56d3c295b8165cb20d8bcfcf3babe9b1b8_0 | Q: What automatic metrics are used for this task?
Text: Introduction
Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence and especially in natural language processing. It is an ability of combining commonsense facts and logic rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human-beings while challenging for machine reasoners.
There have been a few recent tasks and datasets for testing machine commonsense, and most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We call this kind of task deterministic commonsense reasoning because it focuses on modeling the plausibility of given complete scenes. The systems for these tasks have to work with a biased selection of distractors, and are thus less practical or challenging: simply fine-tuning such large pre-trained language encoders can yield performance near or exceeding that of humans BIBREF2. On the other hand, little work has been done so far on testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.
Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.
Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.
Problem Formulation
In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.
The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.
Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where the focus is on how to decode sentences when some words/phrases (e.g. terminology) must be present in the target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulties (e.g. “place” can be a verb/noun).
Commonsense Reasoning. Apart from the challenge of constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge to generate the most plausible scenes. Recall the illustrative example in Figure FIGREF1: even such a simple scene generation process needs a good deal of commonsense knowledge, such as: 1) “apples grow on trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. Expected reasoners have to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”.
The CommonGen Dataset
In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section.
The CommonGen Dataset ::: Collecting Concept-Sets with Captions
Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability to generate natural scenes with a given set of concepts. The concept-sets in our task are expected to be likely to co-occur in natural, daily-life scenes. The concepts in image/video captions, which usually describe scenes in our daily life, possess this desired property. We therefore collect a large number of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.
We assume that if a set of concepts is mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process them and take the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set. Thus, the dev/test sets can better measure the generalization ability of models on unseen combinations of concepts.
The CommonGen Dataset ::: Crowd-Sourcing via AMT
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.
We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data.
The CommonGen Dataset ::: Statistics
We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that there are 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models can perform with unseen concepts.
We analyze the overlap between training concept-sets and dev/test concept-sets. On average, we find that 98.8% of the training instances share no common concept at all with the dev/test data, so the dev/test sets can help us analyze model performance on new combinations of concepts.
We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs.
Methods
In this section, we introduce the methods that we adopt for the proposed constrained text generation task, grouped into several types as follows. First, we consider different kinds of encoder-decoder architectures with a copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize a state-of-the-art pre-trained sentence generation model for our task. Moreover, we include three typical models for abstractive summarization, story generation, and keyword-based decoding of language models, respectively.
Methods ::: Seq-to-Seq Learning
One very straightforward way is to form this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNN (bRNN) BIBREF17 or Transformer (Trans.) BIBREF18 can be directly adopted to the task, just like many other conditional sequence generation problems (translation, summarization, etc.).
Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).
Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences with a constraint on input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods by OpenNMT-py BIBREF20.
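As a reminder of how such a copy mechanism works in general, the decoder blends a generation distribution with attention mass copied from source positions. The sketch below follows the standard pointer-generator style mixture and is not the exact OpenNMT-py implementation.

```python
import torch

def copy_mixture(vocab_logits, attn_weights, src_token_ids, p_gen):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * attention mass on source positions holding w.

    Shapes: vocab_logits (batch, vocab), attn_weights (batch, src_len),
    src_token_ids (batch, src_len), p_gen (batch, 1).
    """
    p_vocab = torch.softmax(vocab_logits, dim=-1)
    copy_dist = torch.zeros_like(p_vocab)
    copy_dist.scatter_add_(1, src_token_ids, attn_weights)   # accumulate attention mass per source token id
    return p_gen * p_vocab + (1.0 - p_gen) * copy_dist

probs = copy_mixture(torch.randn(2, 50), torch.softmax(torch.randn(2, 5), -1),
                     torch.randint(0, 50, (2, 5)), torch.full((2, 1), 0.8))
print(probs.sum(dim=-1))  # each row sums to ~1
```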
Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23.
Methods ::: A BERT-based Method: UniLM
We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objective. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding.
Methods ::: Other methods
Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply Pointer Generator Networks (“PointerGen”) BIBREF26 and Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model respectively for our task.
Methods ::: Incorporating Commonsense Rationales
We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings).
Evaluation
Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis.
Evaluation ::: Setup
We use the proposed CommonGen dataset in two settings: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4, while for the knowledge-aware setting we concatenate the rationales and the input concept-sets together as the inputs (“$+r$”).
Evaluation ::: Automatic Metrics
To automatically evaluate our methods, we propose to use widely used metrics for image/video captioning. This is because the proposed CommonGen task can also be regarded as a captioning task where the context consists of incomplete scenes given as concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons clearer, we report the delta of the BERTScore results, obtained by subtracting the score of merely using the input concept-sets as target sentences, named $\triangle $BERTS.
To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”.
Evaluation ::: Experimental Results
We present the experimental results for the five groups of methods introduced in Section SECREF4. We find that the UniLM model outperforms all other baseline methods by a large margin, which is expected since it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still far from the human bound, and this margin is even larger on the test data.
We notice that the most recent edit-based model, LevenTrans, achieves the best performance among the models without any pre-training. This shows that edit-based sequence generation models can better deal with cases where target sentences share similar vocabulary with the source ones. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might be because they are specially designed for machine translation.
An order-insensitive sequence/set encoder, the “mean encoder”, outperforms order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” and “Trans. w/o Pos”. We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can better support the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures do not outperform simpler models like bRNN.
As for the use of additional sentences retrieved from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not actually aligned with the training data, and more sophisticated methods are needed for encoding such non-sequential facts in a more compositional way.
Evaluation ::: Human Evaluation
From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like the CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are educated to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gains of each system in all 100 cases:
$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\in [1,7]$.
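The equation itself is not reproduced in this text, so the following is only one plausible sketch of how the rank positions $G^{k}_{i,j}$ might be aggregated into per-system scores; the gain conversion (higher for better ranks) and the averaging over annotators are assumptions rather than the paper's exact formula.

```python
import numpy as np

def cumulative_scores(ranks, num_systems=7):
    """Aggregate rank positions into per-system scores.

    `ranks[k, i, j]` is the rank (1 = best, num_systems = worst) given by annotator k
    to system i on example j, i.e. G^k_{i,j}.
    """
    gains = (num_systems + 1) - ranks        # convert ranks so that higher means better
    per_annotator = gains.sum(axis=2)        # S^(k)_i: cumulative gain over the N examples
    return per_annotator.mean(axis=0)        # average the K annotators' scores per system

ranks = np.random.randint(1, 8, size=(5, 7, 100))   # K=5 annotators, 7 systems, N=100 examples
print(cumulative_scores(ranks))
```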
As shown in Table TABREF22, we compare different systems including human bound for both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlation between human evaluation and CIDEr and SPICE are better than the other metrics (see Table TABREF15).
Evaluation ::: Qualitative Analysis
To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts and can also produce repetitions of given concepts (e.g. “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes less sense, because a man usually swallows a sword first before he pulls it out in such performances.
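These two failure modes (missed concepts and repeated concepts) can be checked with a rough diagnostic such as the one below; it uses plain word matching and ignores morphological variants, so it is an illustrative helper rather than one of the official metrics.

```python
import re

def coverage_and_repetition(output_sentence, concepts):
    """Return the fraction of concepts mentioned and the concepts mentioned more than once."""
    tokens = re.findall(r"[a-z]+", output_sentence.lower())
    counts = {c: tokens.count(c) for c in concepts}
    coverage = sum(1 for c in concepts if counts[c] > 0) / len(concepts)
    repeated = sorted(c for c in concepts if counts[c] > 1)
    return coverage, repeated

print(coverage_and_repetition("a dog catches a dog", {"dog", "catch", "frisbee"}))
# -> (0.333..., ['dog'])  # "catches" does not match "catch" here, hence the low coverage
```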
Related Work ::: Machine Common Sense
Machine common sense (MCS) has long been considered as one of the most significant area in artificial intelligence. Recently, there are various emerging datasets for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them are in a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning and there are five candidate answers (words/phrases). The SWAG task asks models to select which situation is the most plausible next situation, given a sentence describing an event.
The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning.
Related Work ::: Constrained Text Generation
Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The most similar scenario with our task is lexically constrained sentence encoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.
Classic methods usually modify the (beam) search algorithms to accommodate lexical constraints, such as Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works at the inference stage to sample sentences containing a sequence of multiple keywords from language models. However, our task brings more challenges: 1) we do not assume there is a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and also work extremely slowly when generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are based on editing operations and non-autoregressive decoding. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned on multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while being far from human-level performance and hardly interpretable.
Conclusion
In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through our extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed even by the state-of-the-art pre-trained language generation model.
For the future research, we believe the following directions are highly valuable to explore: 1) specially designed metrics for automatic evaluation that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge into sentence generation processes; 3) explicitly modeling keyword-centric edits (e.g. insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models performed well on CommonGen can be easily transferred to other commonsense-required reasoning tasks with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering. | BLEU-3/4, ROUGE-2/L, CIDEr, SPICE, BERTScore |
c34a15f1d113083da431e4157aceb11266e9a1b2 | c34a15f1d113083da431e4157aceb11266e9a1b2_0 | Q: Are the models required to also generate rationales?
Text: Introduction
Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence and especially in natural language processing. It is an ability of combining commonsense facts and logic rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human-beings while challenging for machine reasoners.
There have been a few recent tasks and datasets for testing machine commonsense, and most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We call this kind of task deterministic commonsense reasoning because it focuses on modeling the plausibility of given complete scenes. The systems for these tasks have to work with a biased selection of distractors, and are thus less practical or challenging: simply fine-tuning such large pre-trained language encoders can yield performance near or exceeding that of humans BIBREF2. On the other hand, little work has been done so far on testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.
Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.
Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.
Problem Formulation
In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.
The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.
Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where the focus is on how to decode sentences when some words/phrases (e.g. terminology) must be present in the target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulties (e.g. “place” can be a verb/noun).
Commonsense Reasoning. Apart from the challenge of constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge to generate the most plausible scenes. Recall the illustrative example in Figure FIGREF1: even such a simple scene generation process needs a good deal of commonsense knowledge, such as: 1) “apples grow on trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. Expected reasoners have to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”.
The CommonGen Dataset
In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section.
The CommonGen Dataset ::: Collecting Concept-Sets with Captions
Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability to generate natural scenes with a given set of concepts. The concept-sets in our task are expected to be likely to co-occur in natural, daily-life scenes. The concepts in image/video captions, which usually describe scenes in our daily life, possess this desired property. We therefore collect a large number of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.
We assume that if a set of concepts is mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process them and take the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set. Thus, the dev/test sets can better measure the generalization ability of models on unseen combinations of concepts.
The CommonGen Dataset ::: Crowd-Sourcing via AMT
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.
We collect more human-written scenes for each concept-set in the dev and test sets through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourages them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales for the training data.
The CommonGen Dataset ::: Statistics
We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, with 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models perform on unseen concepts.
We analyze the overlap between training concept-sets and dev/test concept-sets. On average, 98.8% of the training instances share no common concept at all with the dev/test data, so the dev/test sets can help us analyze model performance on new combinations of concepts.
We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs.
Methods
In this section, we introduce the methods that we adopt for the proposed constrained text generation task, grouped into several types as follows. Basically, we have different kinds of encoder-decoder architectures with a copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize the state-of-the-art pre-trained sentence generation model for our task. Moreover, we include typical models for abstractive summarization and story generation, as well as keyword-based decoding of language models.
Methods ::: Seq-to-Seq Learning
One very straightforward way is to frame this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNNs (bRNN) BIBREF17 or Transformers (Trans.) BIBREF18 can be directly adopted for the task, just like many other conditional sequence generation problems (translation, summarization, etc.).
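As a small illustration of this formulation, the snippet below (an assumption about the preprocessing, not the authors' exact code) builds a shuffled source string from a concept-set so that the model cannot rely on any canonical concept order.

```python
# Turn a (concept-set, reference sentence) pair into seq2seq training text,
# shuffling the concepts so the "sequence" carries no meaningful order.
import random

def make_seq2seq_example(concepts, target_sentence, seed=None):
    rng = random.Random(seed)
    shuffled = list(concepts)
    rng.shuffle(shuffled)
    source = " ".join(shuffled)   # e.g. "tree pick apple place bag"
    return source, target_sentence

src, tgt = make_seq2seq_example(
    ["apple", "bag", "pick", "place", "tree"],
    "A boy picks some apples from a tree and places them into a bag.")
```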
Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).
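A minimal PyTorch sketch of such a mean-pooled set encoder is given below; the dimensions and layer sizes are illustrative rather than the configuration used in the experiments.

```python
# Order-insensitive "mean encoder": embed the concepts, mean-pool over the set,
# and project with an MLP, so permuting the inputs cannot change the encoding.
import torch
import torch.nn as nn

class MeanConceptEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))

    def forward(self, concept_ids, mask):
        # concept_ids, mask: (batch, max_set_size); mask is True for real concepts
        emb = self.embed(concept_ids)                            # (B, N, E)
        mask = mask.unsqueeze(-1).float()
        pooled = (emb * mask).sum(1) / mask.sum(1).clamp(min=1)  # mean over the set
        return self.mlp(pooled)                                  # (B, H)

enc = MeanConceptEncoder(vocab_size=10000)
ids = torch.tensor([[5, 42, 7, 0, 0]])
out = enc(ids, mask=(ids != 0))
```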
Copying mechanism. The above-mentioned architectures with vanilla attention can miss words in the input sequences and thus produce either unknown tokens or synonyms. To push the decoder towards reproducing the input concepts in the target sentences, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods in OpenNMT-py BIBREF20.
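The core of the copying mechanism can be sketched as mixing the generation distribution with attention weights scattered over the source tokens; the fragment below is a schematic pointer-generator-style computation, not the exact OpenNMT-py implementation used here.

```python
# Schematic copy mechanism: final word distribution = p_gen * generate + (1 - p_gen) * copy.
import torch

def copy_mixture(vocab_logits, attn_weights, src_ids, p_gen):
    # vocab_logits: (B, V); attn_weights: (B, S); src_ids: (B, S) long; p_gen: (B, 1)
    gen_dist = torch.softmax(vocab_logits, dim=-1) * p_gen          # generation part
    copy_dist = torch.zeros_like(gen_dist)
    copy_dist.scatter_add_(1, src_ids, attn_weights * (1 - p_gen))  # copy part
    return gen_dist + copy_dist                                     # (B, V), rows sum to 1
```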
Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23.
Methods ::: A BERT-based Method: UniLM
We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objectives. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding.
Methods ::: Other methods
Given the similarity of our task to abstractive summarization and story generation (with given topic words), we also apply the Pointer Generator Network (“PointerGen”) BIBREF26 and the Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 models, respectively, to our task.
Methods ::: Incorporating Commonsense Rationales
We explore how to utilize additional commonsense knowledge (i.e. rationales) as input to the task. As mentioned in Section SECREF6, we retrieve relevant sentences from the OMCS corpus as additional distant rationales for the training data, and use the ground-truth rationale sentences for the dev/test data. The inputs are no longer the concept-sets themselves, but take the form “[rationales$|$concept-set]” (i.e. the concatenation of the rationale sentences and the original concept-set string).
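Concretely, the knowledge-aware inputs can be assembled as below; the separator and exact formatting are an assumption, since the text only gives the schematic form “[rationales$|$concept-set]”.

```python
# Build a knowledge-aware input string: rationale sentences, then "|", then concepts.
def build_knowledge_aware_input(rationales, concepts, sep=" | "):
    rationale_part = " ".join(rationales)
    concept_part = " ".join(concepts)
    return f"{rationale_part}{sep}{concept_part}"

x = build_knowledge_aware_input(
    ["Apples grow on trees.", "A bag is a container."],
    ["apple", "bag", "pick", "place", "tree"])
# -> "Apples grow on trees. A bag is a container. | apple bag pick place tree"
```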
Evaluation
Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis.
Evaluation ::: Setup
We use the proposed CommonGen dataset in two settings: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4; for the knowledge-aware setting, we concatenate the rationales and the input concept-sets together as the inputs (“$+r$”).
Evaluation ::: Automatic Metrics
For automatically evaluating our methods, we use widely adopted metrics for image/video captioning. This is because the proposed CommonGen task can also be regarded as a captioning task where the context is an incomplete scene given by a concept-set. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons clearer, we report the delta of the BERTScore results, obtained by subtracting the score of merely using the input concept-sets as target sentences, denoted $\triangle $BERTS.
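The $\triangle $BERTS computation can be sketched as follows, assuming the `bert-score` package and taking the first reference per instance; whether the subtraction is done per instance or on corpus averages is not stated, so this shows the average-level variant.

```python
# Sketch of Delta-BERTS: system BERTScore minus the BERTScore obtained when the
# raw concept-set string itself is used as the "prediction".
from bert_score import score as bert_score

def delta_berts(system_outputs, concept_sets, references):
    refs = [r[0] if isinstance(r, (list, tuple)) else r for r in references]
    _, _, f_sys = bert_score(system_outputs, refs, lang="en", verbose=False)
    concept_strings = [" ".join(cs) for cs in concept_sets]   # trivial baseline "prediction"
    _, _, f_base = bert_score(concept_strings, refs, lang="en", verbose=False)
    return (f_sys.mean() - f_base.mean()).item()
```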
To estimate human performance on each metric, we iteratively treat every reference sentence in the dev/test data as a prediction to be compared with all references (including itself). That is, if a model had the same reasoning ability as the average of our crowd workers, its results should exceed this “human bound”.
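A direct implementation of this leave-each-reference-in procedure looks like the following, with `metric_fn` standing in for any of the metrics above.

```python
# Estimate the "human bound" of a metric: score each human reference against the
# full reference set of its instance, then average over all references.
def human_bound(references_per_instance, metric_fn):
    scores = []
    for refs in references_per_instance:         # refs: all human sentences for one concept-set
        for ref in refs:
            scores.append(metric_fn(ref, refs))  # each reference scored against all refs,
    return sum(scores) / len(scores)             # including itself, as described above
```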
Evaluation ::: Experimental Results
We present the experimental results of the five groups of methods that are introduced in Section SECREF4. We find that the UniLM model outperforms all other baseline methods by a large margin, which is expected since it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still far from the human bound, and this gap is even larger on the test data.
We notice that the most recent edit-based model, LevenTrans, achieves the best performance among the models without any pre-training. This shows that edit-based sequence generation models can better deal with cases where target sentences share similar vocabulary with the source. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might be because they are specially designed for machine translation.
The order-insensitive sequence/set encoder, the “mean encoder”, outperforms order-sensitive counterparts like the “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” and “Trans. w/o Pos”. We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers further improve the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures do not outperform simpler models like the bRNN.
As for the use of additional sentences retrieved from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not actually aligned with the training data, and that more sophisticated methods are needed for encoding such non-sequential facts in a more compositional way.
Evaluation ::: Human Evaluation
From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are instructed to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gain of each system over all 100 cases:
Here, $S^{(k)}_i$ is the final score of the $i$-th system given by the $k$-th annotator, and $G^{k}_{i, j}$ is the rank position of the $i$-th system's output for the $j$-th example. In our case, $N=100$, $K = 5$, and $G^{k}_{i, j}\in [1,7]$.
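The display equation for the cumulative gain did not survive extraction; the sketch below is only one plausible aggregation consistent with the stated notation (lower rank is better, ranks in $[1,7]$), and the paper's exact definition may differ.

```python
# One plausible cumulative-gain aggregation (an assumption, not the paper's formula):
# per-example gain of a system is taken as (worst_rank - rank), summed over the
# N examples and averaged over the K annotators.
def cumulative_gain(ranks, worst_rank=7):
    """ranks[k][i][j]: rank position of system i on example j by annotator k."""
    n_systems = len(ranks[0])
    scores = [0.0] * n_systems
    for annotator in ranks:
        for i, system_ranks in enumerate(annotator):
            scores[i] += sum(worst_rank - g for g in system_ranks)
    return [s / len(ranks) for s in scores]   # average over annotators
```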
As shown in Table TABREF22, we compare the different systems, including the human bound, in terms of both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that CIDEr and SPICE correlate better with the human evaluation than the other metrics do (see Table TABREF15).
Evaluation ::: Qualitative Analysis
To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts and can also produce repetitions of given concepts (e.g. “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes less sense, because a man usually swallows a sword first before he pulls it out in such performances.
Related Work ::: Machine Common Sense
Machine common sense (MCS) has long been considered one of the most significant areas in artificial intelligence. Recently, various datasets have emerged for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them adopt a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning and there are five candidate answers (words/phrases). The SWAG task asks models to select which situation is the most plausible next situation, given a sentence describing an event.
The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning.
Related Work ::: Constrained Text Generation
Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The scenario most similar to our task is lexically constrained decoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.
Classic methods usually modify the (beam) search algorithm to accommodate lexical constraints, as in Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works at inference time to sample sentences containing multiple keywords from language models. However, our task brings more challenges: 1) we do not assume there is a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and are also extremely slow at generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are edit-based and non-autoregressive. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned on multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while remaining far from human-level performance and hardly interpretable.
Conclusion
In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through our extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed even by the state-of-the-art pre-trained language generation model.
For future research, we believe the following directions are highly valuable to explore: 1) specially designed automatic evaluation metrics that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge in the sentence generation process; 3) explicit modeling of keyword-centric edits (e.g. insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models that perform well on CommonGen can be easily transferred to other commonsense-required reasoning tasks with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering. | No
061682beb3dbd7c76cfa26f7ae650e548503d977 | 061682beb3dbd7c76cfa26f7ae650e548503d977_0 | Q: Are the rationales generated after the sentences were written?
Text: Introduction
Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence, especially in natural language processing. It is the ability to combine commonsense facts and logical rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human beings while challenging for machine reasoners.
There have been a few recent tasks and datasets for testing machine commonsense, but most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We call this kind of task deterministic commonsense reasoning because it focuses on modeling the plausibility of given complete scenes. The systems for these tasks have to work with a biased selection of distractors, and are thus less practical or challenging. Simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF2. On the other hand, little work has been done so far on testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.
Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.
Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.
Problem Formulation
In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.
The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.
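To make this formulation concrete, one possible in-code representation of a single instance is sketched below; the field names are illustrative and not the dataset's actual schema.

```python
# Illustrative container for one CommonGen instance: an unordered concept-set x,
# one or more reference scenes y, and optional rationale sentences r.
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class CommonGenInstance:
    concepts: FrozenSet[str]                              # x: unordered nouns/verbs
    references: List[str]                                 # y: human-written scenes
    rationales: List[str] = field(default_factory=list)   # r: optional explanations

example = CommonGenInstance(
    concepts=frozenset({"apple", "bag", "pick", "place", "tree"}),
    references=["A boy picks some apples from a tree and places them into a bag."],
    rationales=["Apples grow on trees.", "A bag is a container."])
```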
Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where the focus is on how to decode sentences when some words/phrases (e.g. terminology) must be present in the target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulty (e.g. “place” can be a verb or a noun).
Commonsense Reasoning. Apart from the challenge in constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge for generating the most plausible scenes. Recall the illustrative example in Figure FIGREF1: even such a simple scene generation process requires a fair amount of commonsense knowledge, such as: 1) “apples grow in trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. An expected reasoner has to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”.
The CommonGen Dataset
In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section.
The CommonGen Dataset ::: Collecting Concept-Sets with Captions
Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.
We assume that if a set of concepts is mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process the captions and keep the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set, so that the dev/test sets can better measure the generalization ability of models on unseen combinations of concepts.
The CommonGen Dataset ::: Crowd-Sourcing via AMT
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.
We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data.
The CommonGen Dataset ::: Statistics
We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that there are 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models can perform with unseen concepts.
We analyze the overlap between training concept-sets and dev/test concept-sets. By average, we find that 98.8% of the training instances share no common concept at all with dev/test data, such that the dev/test can help us analyze model performance on new combinations of concepts.
We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs.
Methods
In this section, we introduce the methods that we adopt for the proposed constrained text generation task, grouped into several types as follows. Basically, we have different kinds of encoder-decoder architectures with a copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize the state-of-the-art pre-trained sentence generation model for our task. Moreover, we include typical models for abstractive summarization and story generation, as well as keyword-based decoding of language models.
Methods ::: Seq-to-Seq Learning
One very straightforward way is to form this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNN (bRNN) BIBREF17 or Transformer (Trans.) BIBREF18 can be directly adopted to the task, just like many other conditional sequence generation problems (translation, summarization, etc.).
Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).
Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences with a constraint on input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods by OpenNMT-py BIBREF20.
Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23.
Methods ::: A BERT-based Method: UniLM
We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objective. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding.
Methods ::: Other methods
Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply Pointer Generator Networks (“PointerGen”) BIBREF26 and Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model respectively for our task.
Methods ::: Incorporating Commonsense Rationales
We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings).
Evaluation
Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis.
Evaluation ::: Setup
We use the proposed CommonGen dataset in two settings: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4; for the knowledge-aware setting, we concatenate the rationales and the input concept-sets together as the inputs (“$+r$”).
Evaluation ::: Automatic Metrics
For automatically evaluating our methods, we use widely adopted metrics for image/video captioning. This is because the proposed CommonGen task can also be regarded as a captioning task where the context is an incomplete scene given by a concept-set. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons clearer, we report the delta of the BERTScore results, obtained by subtracting the score of merely using the input concept-sets as target sentences, denoted $\triangle $BERTS.
To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”.
Evaluation ::: Experimental Results
We present the experimental results of the five groups of methods that are introduced in Section SECREF4. We find that the UniLM model outperforms all other baseline methods by a large margin, which is expected since it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still far from the human bound, and this gap is even larger on the test data.
We notice that the most recent edit-based model, LevenTrans, achieves the best performance among the models without any pre-training. This shows that edit-based sequence generation models can better deal with cases where target sentences share similar vocabulary with the source. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might be because they are specially designed for machine translation.
An order-insensitive sequence/set encoder, “mean encoder”, outperform order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” vs “Trans. w/o Pos”. We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can better improve the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures are not outperforming simpler models like bRNN.
As for the use of additional sentences retrieved from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not actually aligned with the training data, and that more sophisticated methods are needed for encoding such non-sequential facts in a more compositional way.
Evaluation ::: Human Evaluation
From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like the CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are educated to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gains of each system in all 100 cases:
$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\in [1,7]$.
As shown in Table TABREF22, we compare different systems including human bound for both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlation between human evaluation and CIDEr and SPICE are better than the other metrics (see Table TABREF15).
Evaluation ::: Qualitative Analysis
To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts and can also produce repetitions of given concepts (e.g. “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes less sense, because a man usually swallows a sword first before he pulls it out in such performances.
Related Work ::: Machine Common Sense
Machine common sense (MCS) has long been considered one of the most significant areas in artificial intelligence. Recently, various datasets have emerged for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them adopt a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning and there are five candidate answers (words/phrases). The SWAG task asks models to select which situation is the most plausible next situation, given a sentence describing an event.
The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning.
Related Work ::: Constrained Text Generation
Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The most similar scenario with our task is lexically constrained sentence encoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.
Classic methods usually modify the (beam) search algorithm to accommodate lexical constraints, as in Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works at inference time to sample sentences containing multiple keywords from language models. However, our task brings more challenges: 1) we do not assume there is a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and are also extremely slow at generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are edit-based and non-autoregressive. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned on multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while remaining far from human-level performance and hardly interpretable.
Conclusion
In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through our extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed even by the state-of-the-art pre-trained language generation model.
For future research, we believe the following directions are highly valuable to explore: 1) specially designed automatic evaluation metrics that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge in the sentence generation process; 3) explicit modeling of keyword-centric edits (e.g. insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models that perform well on CommonGen can be easily transferred to other commonsense-required reasoning tasks with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering. | Yes
3518d8eb84f6228407cfabaf509fd63d60351203 | 3518d8eb84f6228407cfabaf509fd63d60351203_0 | Q: Are the sentences in the dataset written by humans who were shown the concept-sets?
Text: Introduction
Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence and especially in natural language processing. It is an ability of combining commonsense facts and logic rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human-beings while challenging for machine reasoners.
There have been a few recent tasks and datasets for testing machine commonsense, while most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We name this kind of tasks as deterministic commonsense reasoning because they focus on modeling the plausibility of given complete scenes. The systems for these tasks thus have to work with biased selection of distractors, and thus are less practical or challenging. Simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF2. On the other hand, few work has been done so far in testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.
Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.
Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.
Problem Formulation
In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.
The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.
Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where the focus is on how to decode sentences when some words/phrases (e.g. terminology) must be present in the target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulty (e.g. “place” can be a verb or a noun).
Commonsense Reasoning. Apart from the challenge in constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge for generating the most plausible scenes. Recall the illustrative example in Figure FIGREF1: even such a simple scene generation process requires a fair amount of commonsense knowledge, such as: 1) “apples grow in trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. An expected reasoner has to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”.
The CommonGen Dataset
In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section.
The CommonGen Dataset ::: Collecting Concept-Sets with Captions
Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.
We assume that if a set of concepts is mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process the captions and keep the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set, so that the dev/test sets can better measure the generalization ability of models on unseen combinations of concepts.
The CommonGen Dataset ::: Crowd-Sourcing via AMT
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.
We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data.
The CommonGen Dataset ::: Statistics
We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that there are 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models can perform with unseen concepts.
We analyze the overlap between training concept-sets and dev/test concept-sets. By average, we find that 98.8% of the training instances share no common concept at all with dev/test data, such that the dev/test can help us analyze model performance on new combinations of concepts.
We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs.
Methods
In this section, we introduce the methods that we adopt for the proposed constrained text generation task. We group these methods into several types as follows. Basically, we have different kinds of encoder-decoder architectures with copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize the state-of-the-art pre-trained sentence generation model for our task. Moreover, we include three typical models for abstractive summarization, story generation respectively, and keywords-based decoding of language models.
Methods ::: Seq-to-Seq Learning
One very straightforward way is to form this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNN (bRNN) BIBREF17 or Transformer (Trans.) BIBREF18 can be directly adopted to the task, just like many other conditional sequence generation problems (translation, summarization, etc.).
Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).
Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences with a constraint on input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods by OpenNMT-py BIBREF20.
Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23.
Methods ::: A BERT-based Method: UniLM
We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objective. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding.
Methods ::: Other methods
Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply Pointer Generator Networks (“PointerGen”) BIBREF26 and Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model respectively for our task.
Methods ::: Incorporating Commonsense Rationales
We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings).
Evaluation
Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis.
Evaluation ::: Setup
We use the proposed CommonGen dataset in two setting: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4 while we concatenate rationales and input concept-sets together as the knowledge-aware inputs (“$+r$”).
Evaluation ::: Automatic Metrics
For automatically evaluating our methods, we use widely adopted metrics for image/video captioning. This is because the proposed CommonGen task can also be regarded as a captioning task where the context is an incomplete scene given by a concept-set. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons clearer, we report the delta of the BERTScore results, obtained by subtracting the score of merely using the input concept-sets as target sentences, denoted $\triangle $BERTS.
To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”.
Evaluation ::: Experimental Results
We present the experimental results of the five groups of methods that are introduced in Section SECREF4. We find that the UniLM model outperforms all other baseline methods by a large margin, which is expected since it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still far from the human bound, and this gap is even larger on the test data.
We notice that the most recent edit-based model, LevenTrans, achieves the best performance among the models without any pre-training. This shows that edit-based sequence generation models can better deal with cases where target sentences share similar vocabulary with the source. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might be because they are specially designed for machine translation.
An order-insensitive sequence/set encoder, “mean encoder”, outperform order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” vs “Trans. w/o Pos”. We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can better improve the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures are not outperforming simpler models like bRNN.
As for the use of additional sentences retrieved from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not actually aligned with the training data, and that more sophisticated methods are needed to encode such non-sequential facts in a more compositional way.
Evaluation ::: Human Evaluation
From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are instructed to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gain of each system over all 100 cases:
$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\in [1,7]$.
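Since the aggregation formula itself is not reproduced above, the sketch below assumes that each system's score is the sum of its rank positions over the $N$ examples (lower ranks being better), consistent with the symbol definitions; the hit@top3 computation is likewise an assumption.

```python
# A hedged sketch of the cumulative ranking computation; summing rank positions
# per annotator and the hit@top3 definition are assumptions for illustration.
import numpy as np

def cumulative_scores(ranks):
    """ranks: array of shape (K, num_systems, N) with rank positions in [1, 7]."""
    ranks = np.asarray(ranks)
    per_annotator = ranks.sum(axis=2)          # assumed S^(k)_i, shape (K, num_systems)
    hit_top3 = (ranks <= 3).mean(axis=2)       # per-annotator hit@top3 rates
    return per_annotator, hit_top3.mean(axis=0), hit_top3.std(axis=0)
```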
As shown in Table TABREF22, we compare the different systems, including the human bound, on both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlations of the human evaluation with CIDEr and SPICE are better than with the other metrics (see Table TABREF15).
Evaluation ::: Qualitative Analysis
To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts and can also produce repetitions of given concepts (e.g., “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes little sense because in such performances a man usually swallows a sword first before he pulls it out.
Related Work ::: Machine Common Sense
Machine common sense (MCS) has long been considered one of the most significant areas in artificial intelligence. Recently, various datasets have emerged for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them are in a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning, and there are five candidate answers (words/phrases). The SWAG task asks models to select the most plausible next situation, given a sentence describing an event.
The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning.
Related Work ::: Constrained Text Generation
Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The most similar scenario with our task is lexically constrained sentence encoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.
Classic methods usually modify (beam) search algorithms to accommodate lexical constraints, such as Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works at the inference stage to sample sentences containing multiple keywords from language models. However, our task brings more challenges: 1) we do not assume a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and are also extremely slow at generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are based on editing operations or are non-autoregressive. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned on multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while remaining far from human-level performance and hardly interpretable.
Conclusion
In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed even by the state-of-the-art pre-trained language generation model.
For future research, we believe the following directions are highly valuable to explore: 1) specially designed metrics for automatic evaluation that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge into sentence generation processes; 3) explicitly modeling keyword-centric edits (e.g., insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models that perform well on CommonGen can be easily transferred to other reasoning tasks that require commonsense, with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering. | Yes
617c77a600be5529b3391ab0c21504cd288cc7c7 | 617c77a600be5529b3391ab0c21504cd288cc7c7_0 | Q: Where do the concept sets come from?
Text: Introduction
Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence, especially in natural language processing. It is the ability to combine commonsense facts and logical rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human beings while challenging for machine reasoners.
There have been a few recent tasks and datasets for testing machine commonsense, and most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We call this kind of task deterministic commonsense reasoning because it focuses on modeling the plausibility of given complete scenes. The systems for these tasks thus have to work with a biased selection of distractors, and are therefore less practical or challenging. Simply fine-tuning large pre-trained language encoders can yield performance near or exceeding that of humans BIBREF2. On the other hand, little work has been done so far on testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.
Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.
Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) we introduce the first large-scale constrained text generation dataset targeting generative commonsense reasoning; 2) we systematically compare methods for this (lexically) constrained text generation task with extensive experiments and evaluation; and 3) our code and data are publicly available (with the URL in the abstract), so future research in this direction can be directly developed in a unified framework.
Problem Formulation
In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\lbrace c_1,c_2,\dots ,c_n\rbrace \in \mathcal {X}$, where $c_i\in \mathcal {C}$ is a common noun or verb. $\mathcal {X}$ denotes the space of concept-sets and $\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\in \mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.
The task is to learn a structured predictive function $f:\mathcal {X} \rightarrow \mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.
Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in the machine translation community BIBREF10, where the focus is on how to decode sentences when some words/phrases (e.g., terminology) must be present in target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g., “pick” $\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints bring even more difficulties (e.g., “place” can be a verb/noun).
Commonsense Reasoning. Apart from the challenge in constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge for generating the most plausible scenes. Recall the illustrative example in Figure FIGREF1: even such a simple scene generation process needs substantial commonsense knowledge, such as: 1) “apples grow in trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. Expected reasoners have to prioritize target scenes over an infinite number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”.
The CommonGen Dataset
In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section.
The CommonGen Dataset ::: Collecting Concept-Sets with Captions
Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability to generate natural scenes with a given set of concepts. The concept-sets in our task are expected to be likely to co-occur in natural, daily-life scenes. The concepts in image/video captions, which usually describe scenes in our daily life, possess the desired property. We therefore collect a large number of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.
We assume that if a set of concepts is mentioned together in more caption sentences, then this concept-set is more likely to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\sim 5$ concepts, named three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentence. We carefully post-process them and take the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate over all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set. Thus, the dev/test sets can better measure the generalization ability of models on unseen combinations of concepts.
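To make the mining step concrete, below is a simplified sketch of counting concept-set co-occurrences over captions; it assumes the captions have already been reduced to their noun/verb concepts (e.g., via POS tagging and lemmatization), and the frequency threshold is illustrative only.

```python
# A simplified sketch of the concept-set mining step; thresholds, data
# structures, and the preprocessing interface are assumptions for illustration.
from collections import Counter, defaultdict
from itertools import combinations

def mine_concept_sets(caption_concepts, set_size=3):
    """caption_concepts: list of (caption, set_of_concepts) pairs."""
    freq = Counter()
    examples = defaultdict(list)
    for caption, concepts in caption_concepts:
        for cset in combinations(sorted(concepts), set_size):
            freq[cset] += 1
            examples[cset].append(caption)
    # keep concept-sets that co-occur in more than one caption
    return {cset: examples[cset] for cset, n in freq.items() if n > 1}
```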
The CommonGen Dataset ::: Crowd-Sourcing via AMT
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.
We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data.
The CommonGen Dataset ::: Statistics
We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models perform on unseen concepts.
We analyze the overlap between training concept-sets and dev/test concept-sets. On average, we find that 98.8% of the training instances share no common concept at all with the dev/test data, so the dev/test sets can help us analyze model performance on new combinations of concepts.
We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs.
Methods
In this section, we introduce the methods that we adopt for the proposed constrained text generation task. We group these methods into several types as follows. Basically, we have different kinds of encoder-decoder architectures with a copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize a state-of-the-art pre-trained sentence generation model for our task. Moreover, we include three typical models for abstractive summarization, story generation, and keyword-based decoding of language models, respectively.
Methods ::: Seq-to-Seq Learning
One very straightforward way is to frame this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNNs (bRNN) BIBREF17 or Transformers (Trans.) BIBREF18 can be directly adopted for the task, just as for many other conditional sequence generation problems (translation, summarization, etc.).
Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).
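A minimal PyTorch sketch of the order-insensitive “mean encoder” is shown below: an MLP over word vectors followed by mean-pooling, so permutations of the input concepts yield the same representation (“Trans. w/o Pos” analogously just drops the positional embeddings). Hyper-parameters are illustrative only.

```python
# A minimal sketch of an order-insensitive "mean encoder"; dimensions and the
# single hidden layer are illustrative assumptions.
import torch
import torch.nn as nn

class MeanEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.mlp = nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.ReLU())

    def forward(self, token_ids, mask):
        # token_ids, mask: (batch, seq_len); mask is 1 for real tokens
        h = self.mlp(self.embed(token_ids))                    # (batch, seq_len, hidden)
        mask = mask.unsqueeze(-1).float()
        return (h * mask).sum(1) / mask.sum(1).clamp(min=1)    # mean pooling
```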
Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences with a constraint on input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods by OpenNMT-py BIBREF20.
Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23.
Methods ::: A BERT-based Method: UniLM
We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objective. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding.
Methods ::: Other methods
Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply the Pointer Generator Network (“PointerGen”) BIBREF26 and the Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model, respectively, to our task.
Methods ::: Incorporating Commonsense Rationales
We explore how to utilize additional commonsense knowledge (i.e., rationales) as input to the task. As mentioned in Section SECREF6, we retrieve relevant sentences from the OMCS corpus as additional distant rationales for the training data, and use the ground-truth rationale sentences for the dev/test data. The inputs are no longer the concept-sets themselves, but take the form “[rationales$|$concept-set]” (i.e., the rationale sentences concatenated with the original concept-set string).
Evaluation
Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis.
Evaluation ::: Setup
We use the proposed CommonGen dataset in two settings: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4, while for the knowledge-aware setting (“$+r$”) we concatenate the rationales and the input concept-sets together as inputs.
Evaluation ::: Automatic Metrics
For automatically evaluating our methods, we propose to use the metrics widely used for image/video captioning. This is because the proposed CommonGen task can also be regarded as a captioning task where the contexts are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons clearer, we report the delta of the BERTScore results obtained by subtracting the score of merely using the input concept-sets as target sentences, denoted $\triangle $BERTS.
To estimate human performance on each metric, we iteratively treat every reference sentence in the dev/test data as the prediction and compare it with all references (including itself). That is, if a model has the same reasoning ability as the average of our crowd workers, its results should exceed this “human bound”.
Evaluation ::: Experimental Results
We present the experimental results of the five groups of methods introduced in Section SECREF4. We find that the UniLM model outperforms all other baseline methods by a large margin, which is expected since it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still far from the human bound, and this gap is even larger on the test data.
We notice that the most recent edit-based model, LevenTrans, achieves the best performance among models without any pre-training. This shows that edit-based sequence generation models can better deal with cases where target sentences share similar vocabulary with source ones. Nonetheless, the other two models within the same sequence modeling framework (i.e., fairseq) are much worse, which might be because they are specially designed for machine translation.
The order-insensitive sequence/set encoder, “mean encoder”, outperforms order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” and “Trans. w/o Pos”. We assume that for short sequences order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can further improve the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures do not outperform simpler models like bRNN.
As for the use of additional sentences retrieved from the OMCS corpus and the human-written associated rationales, we find that they are not generally helpful in the investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning, such as CIDEr and SPICE, drop. We argue that this might be because the OMCS sentences are not actually aligned with the training data, and that more sophisticated methods are needed to encode such non-sequential facts in a more compositional way.
Evaluation ::: Human Evaluation
From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are instructed to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gain of each system over all 100 cases:
$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\in [1,7]$.
As shown in Table TABREF22, we compare the different systems, including the human bound, on both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlations of the human evaluation with CIDEr and SPICE are better than with the other metrics (see Table TABREF15).
Evaluation ::: Qualitative Analysis
To observe the performance of the models of interest more clearly, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts and can also produce repetitions of given concepts (e.g., “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may not be natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes little sense because in such performances a man usually swallows a sword first before he pulls it out.
Related Work ::: Machine Common Sense
Machine common sense (MCS) has long been considered one of the most significant areas in artificial intelligence. Recently, various datasets have emerged for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them are in a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning, and there are five candidate answers (words/phrases). The SWAG task asks models to select the most plausible next situation, given a sentence describing an event.
The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning.
Related Work ::: Constrained Text Generation
Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The most similar scenario with our task is lexically constrained sentence encoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.
Classic methods usually modify (beam) search algorithms to accommodate lexical constraints, such as Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works at the inference stage to sample sentences containing multiple keywords from language models. However, our task brings more challenges: 1) we do not assume a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot address these issues well and are also extremely slow at generating grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are based on editing operations or are non-autoregressive. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with a pre-trained language encoder and then further fine-tuned on multiple NLG tasks. UniLM achieves the best performance on our proposed CommonGen task, while remaining far from human-level performance and hardly interpretable.
Conclusion
In this paper, we propose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on it. Through extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed even by the state-of-the-art pre-trained language generation model.
For future research, we believe the following directions are highly valuable to explore: 1) specially designed metrics for automatic evaluation that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge into sentence generation processes; 3) explicitly modeling keyword-centric edits (e.g., insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models that perform well on CommonGen can be easily transferred to other reasoning tasks that require commonsense, with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering. | These concept-sets are sampled from several large corpora of image/video captions
53d6cbee3606dd106494e2e98aa93fdd95920375 | 53d6cbee3606dd106494e2e98aa93fdd95920375_0 | Q: How big are improvements of MMM over state of the art?
Text: Introduction
Building a system that comprehends text and answers questions is challenging but fascinating; such a system can be used to test a machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, and the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2 and HotPotQA BIBREF3; 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4 and MCTest BIBREF5.
In comparison to extractive/abstractive QA tasks, the answers in MCQA datasets are in the form of open, natural language sentences and are not restricted to spans in the text. Various question types exist, such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore, the machine needs more advanced reading skills to perform well on this task. Table TABREF1 shows one example from one of the MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some commonsense knowledge to infer that such a conversation can only happen between classmates rather than between a brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge the comprehension ability of a model.
Recently, large and powerful pre-trained language models such as BERT BIBREF8 have been achieving state-of-the-art (SOTA) results on various tasks; however, their potency on MCQA datasets has been severely limited by data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models under the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset).
Methods
In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$.
Methods ::: Model Architecture
Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, the question, and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence is encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into a probability vector through a softmax layer. We choose the option with the highest logit value $p$ as the answer. Cross-entropy loss is used as the loss function. We use pre-trained bidirectional transformer encoders, i.e., BERT and RoBERTa, as the sentence encoder. The top-level classifier is detailed in the next subsection.
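A minimal PyTorch sketch of this scoring scheme is given below; `encoder` stands for any sentence encoder that returns per-token vectors, the FCNN head shown here is the simple baseline classifier (the MAN variant is sketched later), and using the first-token representation as the sequence summary is an illustrative assumption.

```python
# A minimal sketch of the multiple-choice scoring scheme; the encoder interface
# and the first-token pooling are assumptions for illustration.
import torch
import torch.nn as nn

class MultipleChoiceModel(nn.Module):
    def __init__(self, encoder, hidden_dim):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
                                  nn.Linear(hidden_dim, 1))

    def forward(self, input_ids, attention_mask, labels=None):
        # input_ids: (batch, n_options, seq_len) -- one sequence per option
        b, n, l = input_ids.size()
        H = self.encoder(input_ids.view(b * n, l),
                         attention_mask.view(b * n, l))   # (b*n, l, d)
        logits = self.head(H[:, 0]).view(b, n)            # one logit per option
        loss = None
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
        return loss, logits
```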
Methods ::: Multi-step Attention Network
For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for down-stream classification tasks and performs very well BIBREF8. Inspired by the success of the attention network widely used in span-based QA tasks BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.
The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.
We then perform $K$-step reasoning over the memory to output the final prediction. The initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in \lbrace 1,2,...,K-1\rbrace $, the state is calculated by:
where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:
Basically, the MAN classifier dynamically calculates the attention scores between the passage and the (question, option) pair step by step, such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out information in the passage that is irrelevant to the (question, option) pair.
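The following hedged PyTorch sketch follows the attention equations above; since the state-update rule and the final projection are not fully reproduced in the text, the linear update with ReLU and the last-state linear projection used here are explicit assumptions.

```python
# A hedged sketch of the MAN classifier; the state update and output projection
# are assumptions (see comments), while the two attention steps follow the text.
import torch
import torch.nn as nn

class MANClassifier(nn.Module):
    def __init__(self, d, num_steps=5):
        super().__init__()
        self.w1 = nn.Linear(d, 1, bias=False)        # attention over H^P
        self.w2 = nn.Linear(2 * d, 1, bias=False)    # attention over H^QO
        self.update = nn.Linear(2 * d, d)            # assumed state-update rule
        self.out = nn.Linear(d, 1)                   # assumed final projection
        self.num_steps = num_steps

    def forward(self, H_p, H_qo):
        # H_p: (batch, p, d) passage memory; H_qo: (batch, q, d) question-option memory
        alpha = torch.softmax(self.w1(H_p), dim=1)                 # (batch, p, 1)
        s = (alpha * H_p).sum(1)                                   # s^0
        for _ in range(1, self.num_steps):
            pair = torch.cat([s.unsqueeze(1).expand_as(H_qo), H_qo], dim=-1)
            beta = torch.softmax(self.w2(pair), dim=1)             # (batch, q, 1)
            x = (beta * H_qo).sum(1)                               # x^k
            s = torch.relu(self.update(torch.cat([s, x], dim=-1)))  # assumed update
        return self.out(s).squeeze(-1)                             # logit p
```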
Methods ::: Two Stage Training
We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10.
Methods ::: Two Stage Training ::: Coarse-tuning Stage
We first fine-tune the sentence encoder of our model on natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks, such as sentiment analysis, paraphrasing, and span-based question answering, at this stage. However, we found that only the NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details.
Methods ::: Two Stage Training ::: Multi-task Learning Stage
After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset via multi-task learning. We share all model parameters, including the sentence encoder as well as the top-level classifier, across these two datasets.
Experimental Setup ::: Datasets
We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits.
Experimental Setup ::: Speaker Normalization
Passages in the DREAM dataset are dialogues between two or more persons. Every utterance in a dialogue starts with the speaker name. For example, in the utterance “m: How would he know?”, “m” is the abbreviation of “man”, indicating that this utterance is from a man. More than 90% of utterances have the speaker names “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear to the model which speaker the question is asking about, we use a speaker normalization strategy that replaces “w” or “f” with “woman” and “m” with “man” in the speaker names of the utterances. We found this simple strategy quite effective, providing us with a 1% improvement. We always use this strategy for the DREAM dataset for our method unless explicitly mentioned otherwise.
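A minimal sketch of the speaker-normalization step is shown below; the exact utterance format ("<speaker>: <text>") is assumed from the example above.

```python
# A minimal sketch of speaker normalization; the utterance format is assumed.
import re

SPEAKER_MAP = {"m": "man", "w": "woman", "f": "woman"}

def normalize_speakers(utterance):
    match = re.match(r"^([a-z])\s*:\s*(.*)$", utterance)
    if match and match.group(1) in SPEAKER_MAP:
        return f"{SPEAKER_MAP[match.group(1)]}: {match.group(2)}"
    return utterance

print(normalize_speakers("m: How would he know?"))  # -> "man: How would he know?"
```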
Experimental Setup ::: Multi-task Learning
For the multi-task learning stage, at each training step we randomly selected one of the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps was reached or the early stopping criterion was met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17.
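A short sketch of proportional task sampling for this stage is given below; the dataset objects and batch representation are placeholders for illustration.

```python
# A short sketch of proportional sampling across tasks; data structures are
# illustrative assumptions.
import random

def sample_batch(datasets):
    """datasets: dict name -> list of batches; sampling prob. ~ dataset size."""
    names = list(datasets)
    sizes = [len(datasets[n]) for n in names]
    total = sum(sizes)
    name = random.choices(names, weights=[s / total for s in sizes], k=1)[0]
    return name, random.choice(datasets[name])
```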
Experimental Setup ::: Training Details
We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material.
In the TOEFL dataset, more than 90% of passages have more than 512 words, which exceeds the maximum sequence length that BERT supports; thus we cannot process the whole passage within one forward pass. To solve this issue, we propose a sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets, and each snippet from the same passage is assigned the same label. In the training phase, all snippets are used for training; in the inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with the highest logit value as the prediction. In experiments, we found an overlap of 256 words to be optimal, which improves the BERT-Base model from an accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset.
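A simplified sketch of the sliding-window splitting and inference-time aggregation is shown below; the token-level details (e.g., reserving room for question and option tokens) are assumptions and summing snippet logits is one reasonable reading of "aggregate".

```python
# A simplified sketch of the sliding-window strategy; splitting granularity and
# the logit aggregation rule are assumptions for illustration.
import numpy as np

def make_snippets(tokens, window=512, overlap=256):
    step = window - overlap
    return [tokens[i:i + window]
            for i in range(0, max(1, len(tokens) - overlap), step)]

def aggregate_logits(snippet_logits):
    """snippet_logits: (num_snippets, n_options) array; sum and take argmax."""
    return int(np.asarray(snippet_logits).sum(axis=0).argmax())
```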
Results
We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.
We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method.
To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time from the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second-stage multi-task learning part hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, which provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we have a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement.
Discussion ::: Why does natural language inference help?
As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture that one of the reasons is that, in order to pick the correct answer, we need to rely on language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment of the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the (question, answer) pair as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, part of the MCQA task can be deemed an NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and that it can help support other tasks that require higher levels of language processing abilities BIBREF21. We provide several more examples that require language inference reading skills in Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data in the coarse-tuning stage.
Discussion ::: Can other tasks help with MCQA?
By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something, and in some cases the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks, such as sentiment classification and paraphrasing, also help with MCQA problems?
To answer this question, we select several representative datasets for five categories as the up-stream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target datasets: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets, but the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it yields the worst performance. This suggests that span-based QA might not be an appropriate source task for transfer learning for MCQA. We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.
For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and combining all five datasets together, denoted as “GLUE-NLI”. As the results shown in Table TABREF23, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin.
Lastly, among all these tasks, using the MCQA task itself, i.e., pre-training on the RACE dataset, helps boost the performance most. This result agrees with the intuition that an in-domain dataset can be the most ideal data for transfer learning.
In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful.
Discussion ::: NLI dataset helps with convergence
The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models, which have a much larger number of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets, convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning makes the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning the model does not converge at all in the first several epochs, which can be completely resolved with the help of NLI data.
Discussion ::: Multi-stage or Multi-task
In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset.
Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained on the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27 show that casting the fine-tuning process on the three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datasets.
Discussion ::: Multi-steps reasoning is important
Previous results show that the MAN classifier brings improvement compared with the FCNN classifier, but we are also interested in how the performance changes as we vary the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we use FCNN instead of MAN as the classifier. We observe that there is a gradual improvement as we increase $K$ from 1 to 5, but after 5 steps the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to reflect its benefits.
Discussion ::: Could the source dataset be benefited?
So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can benefit the source dataset itself. Table TABREF31 summarizes the results of the BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques bring improvements over the baseline model for the source dataset RACE, among which the NLI coarse-tuning stage helps elevate the scores most.
Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder.
Discussion ::: Error Analysis
In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples from the development set of the DREAM dataset that had wrong predictions by the BERT-Base baseline model. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy for each question type in the last column of Table TABREF34. We find that our best model can improve upon every question type significantly, especially the matching problems, and, most surprisingly, our best model can even greatly improve its ability to solve arithmetic problems, achieving an accuracy of 73.7%.
However, can our model really do math? To investigate this question, we sampled some arithmetic questions that were correctly predicted by our model, made small alterations to the passage or question, and then checked whether the model still made the correct choice. We found the model to be very fragile to these minor alterations, indicating that it is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material.
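As an illustration of this probing procedure, the sketch below perturbs the numbers in a passage and checks whether the model's choice changes; the `predict` function, the regular-expression perturbation, and the example fields are all hypothetical stand-ins for the manual alterations described above, not the exact protocol used in the paper.

```python
import re

def perturb_numbers(text, delta=1):
    """Shift every integer in `text` by `delta` -- a crude stand-in for the
    small manual alterations applied to arithmetic questions."""
    return re.sub(r"\d+", lambda m: str(int(m.group()) + delta), text)

def prediction_changes(example, predict):
    """Return True if a small numeric perturbation of the passage flips the
    model's choice. `predict(passage, question, options)` returns an option index.
    A real check would also recompute the gold answer for the new numbers."""
    before = predict(example["passage"], example["question"], example["options"])
    after = predict(perturb_numbers(example["passage"]),
                    example["question"], example["options"])
    return before != after
```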
Related Work
There is increasing interest in machine reading comprehension (MRC) for question answering (QA). Extractive QA tasks primarily focus on locating text spans in the given document or corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free-text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers in these datasets are still extractive. Multiple-choice QA datasets are collected either via crowdsourcing or from examinations designed by educational experts BIBREF7. In this type of QA dataset, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.
Progress in MRC research first relies on breakthroughs in sentence encoders, from the basic LSTM to pre-trained transformer-based models BIBREF8, which have elevated the performance of all MRC models by a large margin. Besides, attention mechanisms between the context and the query can empower neural models with higher performance BIBREF11. In addition, techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can also be helpful.
Transfer learning has been widely proven effective across many domains in NLP. In the QA domain, the best-known example of transfer learning is fine-tuning a pre-trained language model such as BERT on downstream QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed a type of transfer learning, since during training on multiple datasets from different domains for different tasks, knowledge is shared and transferred among the tasks; this has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task.
Conclusions
We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains. | test accuracy of 88.9%, which exceeds the previous best by 16.9% |
9dc844f82f520daf986e83466de0c84d93953754 | 9dc844f82f520daf986e83466de0c84d93953754_0 | Q: What out of domain datasets authors used for coarse-tuning stage?
Text: Introduction
Building a system that comprehends text and answers questions is challenging but fascinating, and it can be used to test a machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, and the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA, such as SQuAD BIBREF2 and HotPotQA BIBREF3, and 2) multiple-choice QA (MCQA) tasks, such as MultiRC BIBREF4 and MCTest BIBREF5.
In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model.
Recently, large and powerful pre-trained language models such as BERT BIBREF8 have been achieving state-of-the-art (SOTA) results on various tasks; however, their potency on MCQA datasets has been severely limited by data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 than on MC500 (an 8–10% gap) since the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models under the constraint of limited training data, using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset).
Methods
In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. An MCQA model aims to choose the correct answer from the answer options based on $P$ and $Q$.
Methods ::: Model Architecture
Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, the question, and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Each sequence is then encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into a probability vector through a softmax layer. We choose the option with the highest logit value $p$ as the answer. Cross-entropy loss is used as the loss function. We use pre-trained bidirectional transformer encoders, i.e., BERT and RoBERTa, as the sentence encoder. The top-level classifier is detailed in the next subsection.
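To make the option-scoring pipeline concrete, below is a minimal PyTorch sketch of this architecture. The `encoder` stands in for BERT or RoBERTa and is assumed to return token-level hidden states; pooling the first token in the FCNN head and all other implementation details beyond what the text states are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNNHead(nn.Module):
    """Two-layer classifier: tanh hidden layer, linear output (no activation).
    Pooling the first token's hidden state is an assumption for this sketch."""
    def __init__(self, d):
        super().__init__()
        self.hidden = nn.Linear(d, d)
        self.out = nn.Linear(d, 1)

    def forward(self, h):                              # h: (batch, seq_len, d)
        pooled = torch.tanh(self.hidden(h[:, 0]))      # hidden layer with tanh
        return self.out(pooled)                        # single logit per sequence

class MultipleChoiceModel(nn.Module):
    """Scores each concatenated (passage, question, option) sequence and
    picks the option with the highest logit."""
    def __init__(self, encoder, classifier):
        super().__init__()
        self.encoder = encoder          # maps (batch, seq_len) ids -> (batch, seq_len, d)
        self.classifier = classifier    # e.g., FCNNHead or the MAN module below

    def forward(self, input_ids, labels=None):
        # input_ids: (batch, n_options, seq_len), one sequence per answer option
        batch, n_options, seq_len = input_ids.shape
        flat = input_ids.view(batch * n_options, seq_len)
        hidden = self.encoder(flat)                     # (batch*n_options, seq_len, d)
        logits = self.classifier(hidden).view(batch, n_options)
        loss = None
        if labels is not None:
            # the softmax over options is applied implicitly by cross-entropy
            loss = F.cross_entropy(logits, labels)
        return logits, logits.argmax(dim=-1), loss
```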
Methods ::: Multi-step Attention Network
For the top-level classifier on top of the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention networks widely used in span-based QA tasks BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.
The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.
We then perform $K$-step reasoning over the memory to output the final prediction. The initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in \lbrace 1,2,...,K-1\rbrace $, the state is calculated by:
where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:
Essentially, the MAN classifier calculates the attention scores between the passage and the (question, option) pair step by step, so that the attention can refine itself through several steps of deliberation. This attention mechanism helps filter out information in the passage that is irrelevant to the (question, option) pair.
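The following sketch implements the attention steps above. The initial state and the attended (question, option) summary follow the formulas in the text; the state-update rule and the final projection are not reproduced in this excerpt, so the GRU-style update and the linear output layer below are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class MultiStepAttentionNet(nn.Module):
    """Sketch of the MAN top-level classifier with K reasoning steps."""
    def __init__(self, d, k_steps=5):
        super().__init__()
        self.k_steps = k_steps
        self.w1 = nn.Linear(d, 1, bias=False)       # scores passage tokens (alpha)
        self.w2 = nn.Linear(2 * d, 1, bias=False)   # scores [state; QO token] (beta)
        # Assumption: a GRU cell for the state update s^k = f(s^{k-1}, x^k);
        # the exact update rule is not shown in the excerpt above.
        self.update = nn.GRUCell(d, d)
        # Assumption: final logit computed from the last state via a linear layer.
        self.out = nn.Linear(d, 1)

    def forward(self, h_p, h_qo):
        # h_p: (batch, p, d) passage memory; h_qo: (batch, q, d) question+option memory
        alpha = torch.softmax(self.w1(h_p).squeeze(-1), dim=-1)        # (batch, p)
        s = torch.einsum("bp,bpd->bd", alpha, h_p)                     # s^0
        for _ in range(1, self.k_steps):
            # attention over (question, option) tokens conditioned on the state
            s_exp = s.unsqueeze(1).expand(-1, h_qo.size(1), -1)        # (batch, q, d)
            beta = torch.softmax(
                self.w2(torch.cat([s_exp, h_qo], dim=-1)).squeeze(-1), dim=-1)
            x = torch.einsum("bq,bqd->bd", beta, h_qo)                 # x^k
            s = self.update(x, s)                                      # s^k (assumed GRU)
        return self.out(s)                                             # one logit
```

For example, with BERT-Base hidden size $d=768$ this could be instantiated as `MultiStepAttentionNet(d=768, k_steps=5)`, matching the number of reasoning steps discussed later in the paper.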
Methods ::: Two Stage Training
We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10.
Methods ::: Two Stage Training ::: Coarse-tuning Stage
We first fine-tune the sentence encoder of our model on natural language inference (NLI) tasks. For exploration, we also tried fine-tuning the sentence encoder on other types of tasks, such as sentiment analysis, paraphrasing, and span-based question answering, at this stage. However, we found that only the NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details.
Methods ::: Two Stage Training ::: Multi-task Learning Stage
After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset via multi-task learning. We share all model parameters, including the sentence encoder as well as the top-level classifier, between these two datasets.
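A high-level sketch of the two-stage procedure is shown below; the individual training routines are left abstract, and only the stage ordering follows the text, so the function names and arguments are placeholders.

```python
def train_mmm(model, nli_datasets, source_dataset, target_dataset,
              finetune, multitask_train):
    """Two-stage MMM training: coarse-tune on out-of-domain NLI data, then
    multi-task fine-tune on the in-domain source (RACE) plus the target."""
    # Stage 1: coarse-tuning on NLI (out-of-domain).
    for nli in nli_datasets:
        finetune(model, nli)
    # Stage 2: multi-task fine-tuning on source + target (in-domain),
    # sharing all parameters (encoder and classifier) across the two datasets.
    multitask_train(model, {"source": source_dataset, "target": target_dataset})
    return model
```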
Experimental Setup ::: Datasets
We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits.
Experimental Setup ::: Speaker Normalization
Passages in the DREAM dataset are dialogues between two or more persons. Every utterance in a dialogue starts with the speaker name. For example, in the utterance “m: How would he know?”, “m” is the abbreviation of “man”, indicating that this utterance is from a man. More than 90% of utterances have the speaker names “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model which speaker the question is asking about, we use a speaker normalization strategy that replaces “w” or “f” with “woman” and “m” with “man” in the utterances. We found this simple strategy quite effective, providing a 1% improvement. We always use this strategy for the DREAM dataset in our method unless explicitly mentioned otherwise.
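A minimal sketch of this normalization as a string transformation; the abbreviation-to-name mapping and the "speaker: utterance" format come from the description above, while the exact matching logic is an illustrative assumption.

```python
import re

# Map abbreviated speaker tags to full names, as described above.
SPEAKER_MAP = {"m": "man", "w": "woman", "f": "woman"}

def normalize_speakers(dialogue):
    """Replace abbreviated speaker names at the start of each utterance.

    `dialogue` is a list of utterance strings such as "m: How would he know?".
    """
    normalized = []
    for utt in dialogue:
        match = re.match(r"^\s*([a-zA-Z]+)\s*:\s*(.*)$", utt)
        if match and match.group(1).lower() in SPEAKER_MAP:
            utt = f"{SPEAKER_MAP[match.group(1).lower()]}: {match.group(2)}"
        normalized.append(utt)
    return normalized

# Example:
# normalize_speakers(["m: How would he know?", "w: He told me."])
# -> ["man: How would he know?", "woman: He told me."]
```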
Experimental Setup ::: Multi-task Learning
For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17.
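A sketch of this sampling loop is given below, assuming two map-style datasets and a generic `train_step`; batching, early stopping, and optimizer details are placeholders rather than the authors' implementation.

```python
import random

def multitask_train(datasets, train_step, max_steps, batch_size=16):
    """Proportional-sampling multi-task loop over e.g. {"race": ..., "dream": ...}.

    At each step a dataset is drawn with probability proportional to its size,
    then a random batch from that dataset is used for one gradient update.
    """
    names = list(datasets.keys())
    sizes = [len(datasets[n]) for n in names]
    total = sum(sizes)
    weights = [s / total for s in sizes]   # proportional sampling probabilities

    for step in range(max_steps):
        name = random.choices(names, weights=weights, k=1)[0]
        batch = random.sample(list(datasets[name]),
                              k=min(batch_size, len(datasets[name])))
        train_step(name, batch)            # shared encoder and classifier for all tasks
```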
Experimental Setup ::: Training Details
We used a linear learning rate decay schedule with a warm-up proportion of $0.1$. We set the dropout rate to $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for the DREAM dataset and to 0 for the other datasets. The learning rate and number of training epochs vary across datasets and encoder types and are summarized in Section 1 of the Supplementary Material.
In the TOEFL dataset, more than 90% of passages have more than 512 words, which exceeds the maximum sequence length that BERT supports, so we cannot process the whole passage within one forward pass. To solve this issue, we propose a sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets; each snippet from the same passage is assigned the same label. In the training phase, all snippets are used for training; in the inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with the highest logit value as the prediction. In experiments, we found an overlap of 256 words to be optimal, which improves the BERT-Base model from an accuracy of 50.0% to 53.2%. We adopt this sliding window strategy only for the TOEFL dataset.
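A sketch of the sliding-window split and logit aggregation; the 512-token window and 256-token overlap come from the text, while the aggregation by summation is an assumption (the text only says the logit vectors of all snippets are aggregated).

```python
def split_into_windows(tokens, window=512, overlap=256):
    """Split a long token list into overlapping snippets of length `window`."""
    stride = window - overlap
    snippets = []
    for start in range(0, max(len(tokens) - overlap, 1), stride):
        snippets.append(tokens[start:start + window])
    return snippets

def aggregate_option_logits(per_snippet_logits):
    """Aggregate option logits over snippets of the same passage.

    `per_snippet_logits` is a list of logit vectors, one per snippet. Summation
    is one simple aggregation choice; the prediction is the option with the
    highest aggregated logit.
    """
    n_options = len(per_snippet_logits[0])
    totals = [sum(snippet[i] for snippet in per_snippet_logits)
              for i in range(n_options)]
    return max(range(n_options), key=lambda i: totals[i])
```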
Results
We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.
We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method.
To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time from the BERT-Base model. The results are shown in Table TABREF18. We see that removing the second-stage multi-task learning hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, as it provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we see a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement.
Discussion ::: Why does natural language inference help?
As shown in Table TABREF18, coarse-tuning on NLI tasks helps improve the performance of MCQA. We conjecture that one of the reasons is that, in order to pick the correct answer, we need to rely on language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment of the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the (question, answer) pair as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that is best entailed by the premise. In this sense, part of the MCQA task can be deemed an NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and can help support other tasks that require higher levels of language processing BIBREF21. We provide several more examples that require language inference reading skills in Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data in the coarse-tuning stage.
Discussion ::: Can other tasks help with MCQA?
By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something, and in some cases the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks, such as sentiment classification and paraphrasing, also help with MCQA problems?
To answer this question, we select several representative datasets from five categories as the upstream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments in which we first train the BERT-Base model on each of the five categories and then further fine-tune it on the target dataset: DREAM or MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k training examples) and the Yelp dataset (around 430k training examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that the sentiment analysis datasets do not help much with our target MCQA datasets, but the paraphrase datasets do bring some improvements. For span-based QA, only SQuAD 2.0 helps improve performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it yields the worst performance. This suggests that span-based QA might not be an appropriate source task for transfer learning to MCQA. We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.
For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QNLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI, denoted as “NLI”, and all five datasets combined, denoted as “GLUE-NLI”. As the results in Table TABREF23 show, NLI and GLUE-NLI are comparable and both improve the target dataset by a large margin.
Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on RACE dataset, can help boost the performance, most. This result agrees with the intuition that the in-domain dataset can be the most ideal data for transfer learning.
In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful.
Discussion ::: NLI dataset helps with convergence
The first stage of coarse-tuning with NLI data not only improves accuracy but also helps the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models, which have a much larger number of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets, convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning makes the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning the model does not converge at all in the first several epochs, which can be completely resolved with the help of NLI data.
Discussion ::: Multi-stage or Multi-task
In a typical scenario with one source and one target dataset, a natural question is whether we should train a model on both simultaneously via multi-task learning, or first train on the source dataset and then on the target sequentially. Many previous works adopted the latter approach BIBREF19, BIBREF20, BIBREF23, and BIBREF20 demonstrated that sequential fine-tuning outperforms multi-task learning in their experiments. However, we made the opposite observation in our experiments. Specifically, we conducted a pair of control experiments: in one, we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune it on the target dataset; in the other, we train the model on RACE and the target dataset simultaneously via multi-task learning. The comparison results are shown in Table TABREF27. We see that, compared with sequential fine-tuning, multi-task learning achieves better performance. We conjecture that in the sequential setting, while the model is being fine-tuned on the target dataset, some knowledge learned from the source dataset may be lost, since the model is no longer exposed to the source data at this stage. In the multi-task setting, this information is retained and can therefore better help improve performance on the target dataset.
Given that multi-task learning outperforms sequential fine-tuning, another question naturally arises: what if we merged the coarse-tuning and multi-task learning stages, i.e., simultaneously trained on the NLI, source, and target datasets under a single multi-task learning framework? We conducted another pair of control experiments to investigate this. The results in Table TABREF27 show that casting the fine-tuning process on the three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework of coarse-tuning on out-of-domain datasets followed by fine-tuning on in-domain datasets.
Discussion ::: Multi-step reasoning is important
Previous results show that the MAN classifier improves over the FCNN classifier, but we are also interested in how the performance changes as we vary the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we use the FCNN rather than MAN as the classifier. We observe a gradual improvement as we increase $K$ from 1 to 5, after which the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to show its benefits.
Discussion ::: Could the source dataset also benefit?
So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can also benefit the source dataset itself. Table TABREF31 summarizes the results of BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques can bring in improvements over the baseline model for the source dataset RACE, among which NLI coarse-tuning stage can help elevate the scores most.
Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder.
Discussion ::: Error Analysis
In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples that had wrong predictions by the BERT-Base baseline model from the development set of DREAM dataset. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in the Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy of each question type in the last column of Table TABREF34. We find that our best model can improve upon every question type significantly especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability on solving the arithmetic problems, achieving the accuracy of 73.7%.
However, can our model really do math? To investigate this question, we sampled some arithmetic questions that were correctly predicted by our model, made small alterations to the passage or question, and then checked whether the model still made the correct choice. We found the model to be very fragile to these minor alterations, indicating that it is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material.
Related Work
There are increasing interests in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowd sourcing, or collected from examinations designed by educational experts BIBREF7. In this type of QA datasets, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.
Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, the attention mechanisms between the context and the query can empower the neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can be also helpful.
Transfer learning has been widely proven effective across many domains in NLP. In the QA domain, the best-known example of transfer learning is fine-tuning a pre-trained language model such as BERT on downstream QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed a type of transfer learning, since during training on multiple datasets from different domains for different tasks, knowledge is shared and transferred among the tasks; this has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task.
Conclusions
We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains. | MultiNLI BIBREF15 and SNLI BIBREF16 |
9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11 | 9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11_0 | Q: What are state of the art methods MMM is compared to?
Text: Introduction
Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5.
In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model.
Recently, large and powerful pre-trained language models such as BERT BIBREF8 have been achieving state-of-the-art (SOTA) results on various tasks; however, their potency on MCQA datasets has been severely limited by data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 than on MC500 (an 8–10% gap) since the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models under the constraint of limited training data, using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset).
Methods
In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$.
Methods ::: Model Architecture
Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. The top-level classifier will be detailed in the next subsection.
Methods ::: Multi-step Attention Network
For the top-level classifier on top of the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention networks widely used in span-based QA tasks BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.
The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.
We then perform $K$-step reasoning over the memory to output the final prediction. Initially, the initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in {1,2,...,K-1}$, the state is calculated by:
where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:
Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage against (question, option) pair.
Methods ::: Two Stage Training
We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10.
Methods ::: Two Stage Training ::: Coarse-tuning Stage
We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details.
Methods ::: Two Stage Training ::: Multi-task Learning Stage
After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset via multi-task learning. We share all model parameters, including the sentence encoder as well as the top-level classifier, between these two datasets.
Experimental Setup ::: Datasets
We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits.
Experimental Setup ::: Speaker Normalization
Passages in DREAM dataset are dialogues between two persons or more. Every utterance in a dialogue starts with the speaker name. For example, in utterance “m: How would he know?”, “m” is the abbreviation of “man” indicating that this utterance is from a man. More than 90% utterances have the speaker names as “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model to learn which speaker the question is asking about, we used a speaker normalization strategy by replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy is quite effective, providing us with 1% improvement. We will always use this strategy for the DREAM dataset for our method unless explicitly mentioned.
Experimental Setup ::: Multi-task Learning
For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17.
Experimental Setup ::: Training Details
We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material.
More than 90% of passages have more than 512 words in the TOEFL dataset, which exceed the maximum sequence length that BERT supports, thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets and each snippet from the same passage will be assigned with the same label. In training phase, all snippets will be used for training, and in inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with highest logit value as the prediction. In experiments, we found the overlap of 256 words is the optimal, which can improve the BERT-Base model from accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset.
Results
We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.
We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method.
To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time from the BERT-Base model. The results are shown in Table TABREF18. We see that removing the second-stage multi-task learning hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, as it provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we see a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement.
Discussion ::: Why does natural language inference help?
As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in the bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment to the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the pair of (question, answer) as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, the part of MCQA task can be deemed as a NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and it can help support other tasks that require higher level of language processing abilities BIBREF21. We provided several more examples that require language inference reading skills in the Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data with the coarse-tuning stage.
Discussion ::: Can other tasks help with MCQA?
By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks such as sentiment classification, paraphrasing also help with MCQA problems?
To answer this question, we select several representative datasets from five categories as the upstream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments in which we first train the BERT-Base model on each of the five categories and then further fine-tune it on the target dataset: DREAM or MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k training examples) and the Yelp dataset (around 430k training examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that the sentiment analysis datasets do not help much with our target MCQA datasets, but the paraphrase datasets do bring some improvements. For span-based QA, only SQuAD 2.0 helps improve performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it yields the worst performance. This suggests that span-based QA might not be an appropriate source task for transfer learning to MCQA. We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.
For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and combining all five datasets together, denoted as “GLUE-NLI”. As the results shown in Table TABREF23, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin.
Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on RACE dataset, can help boost the performance, most. This result agrees with the intuition that the in-domain dataset can be the most ideal data for transfer learning.
In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful.
Discussion ::: NLI dataset helps with convergence
The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models that have much larger amount of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets , convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning can make the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all at the first several epochs, which can be completely resolved by the help of NLI data.
Discussion ::: Multi-stage or Multi-task
In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset.
Given that multi-task learning outperforms sequential fine-tuning, another question naturally arises: what if we merged the coarse-tuning and multi-task learning stages, i.e., simultaneously trained on the NLI, source, and target datasets under a single multi-task learning framework? We conducted another pair of control experiments to investigate this. The results in Table TABREF27 show that casting the fine-tuning process on the three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework of coarse-tuning on out-of-domain datasets followed by fine-tuning on in-domain datasets.
Discussion ::: Multi-step reasoning is important
Previous results show that the MAN classifier improves over the FCNN classifier, but we are also interested in how the performance changes as we vary the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we use the FCNN rather than MAN as the classifier. We observe a gradual improvement as we increase $K$ from 1 to 5, after which the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to show its benefits.
Discussion ::: Could the source dataset also benefit?
So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can also benefit the source dataset itself. Table TABREF31 summarizes the results of BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques can bring in improvements over the baseline model for the source dataset RACE, among which NLI coarse-tuning stage can help elevate the scores most.
Since we found that all parts of MMM work well for the source dataset, we used them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we list the officially reported scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained with the RoBERTa-Large encoder.
Discussion ::: Error Analysis
In order to investigate how well our model performs on different types of questions, we conducted an error analysis by first randomly selecting 150 samples from the development set of the DREAM dataset that were wrongly predicted by the BERT-Base baseline model. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy for each question type in the last column of Table TABREF34. We find that our best model improves significantly upon every question type, especially the matching problems; most surprisingly, it even greatly improves on arithmetic problems, achieving an accuracy of 73.7%.
However, can our model really do math? To investigate this question, we sampled some arithmetic questions that were correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model could still make the correct choices. We found that our model is very fragile to these minor alterations, implying that it is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material.
Related Work
There is increasing interest in machine reading comprehension (MRC) for question answering (QA). Extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free-text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers in these datasets are still extractive. Multiple-choice QA datasets are collected either via crowdsourcing or from examinations designed by educational experts BIBREF7. In this type of QA dataset, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.
Progress in MRC research relies first on breakthroughs in sentence encoders, from basic LSTMs to pre-trained transformer-based models BIBREF8, which have elevated the performance of all MRC models by a large margin. In addition, attention mechanisms between the context and the query can give neural models higher performance BIBREF11, and techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can also be helpful.
Transfer learning has been widely proven to be effective across many domains in NLP. In the QA domain, the most well-known example of transfer learning is fine-tuning pre-trained language models such as BERT on downstream QA datasets such as SQuAD BIBREF8. Multi-task learning can also be deemed a type of transfer learning, since during training on multiple datasets from different domains for different tasks, knowledge is shared and transferred from each task to the others; this has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task.
Conclusions
We propose MMM, a multi-stage multi-task transfer learning method for multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also performed a detailed analysis to explore the importance of both our training strategies and different kinds of in-domain and out-of-domain datasets. We hope our work can also shed light on new directions for other NLP domains. | FTLM++, BERT-large, XLNet
36d892460eb863220cd0881d5823d73bbfda172c | 36d892460eb863220cd0881d5823d73bbfda172c_0 | Q: What four representative datasets are used for benchmark?
Text: Introduction
Building a system that comprehends text and answers questions is challenging but fascinating, and such systems can be used to test a machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, and the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA, such as SQuAD BIBREF2 and HotPotQA BIBREF3, and 2) multiple-choice QA (MCQA) tasks, such as MultiRC BIBREF4 and MCTest BIBREF5.
In comparison to extractive/abstractive QA tasks, the answers in MCQA datasets are in the form of open, natural language sentences and are not restricted to spans in the text. Various question types exist, such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore the task requires more advanced reading skills for the machine to perform well. Table TABREF1 shows one example from one of the MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge the comprehension ability of a model.
Recently, large and powerful pre-trained language models such as BERT BIBREF8 have been achieving state-of-the-art (SOTA) results on various tasks; however, their potency on MCQA datasets has been severely limited by data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (an 8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models under the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset).
Methods
In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$.
Methods ::: Model Architecture
Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, the question, and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence is encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into a probability vector through a softmax layer. We choose the option with the highest logit value $p$ as the answer. Cross-entropy loss is used as the loss function. We used pre-trained bidirectional transformer encoders, i.e., BERT and RoBERTa, as the sentence encoder. The top-level classifier is detailed in the next subsection.
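A minimal sketch of this option-scoring setup in Python, using the Huggingface Transformers and PyTorch APIs (the encoder checkpoint, the simple [CLS]-pooled linear head, and the toy inputs are illustrative assumptions, not the exact training code of the paper):

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class MCQAScorer(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Top-level classifier C: projects the sequence representation to a single logit.
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # input_ids: (n_options, seq_len), one row per concatenated (passage, question, option).
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)[0]
        pooled = hidden[:, 0]                        # [CLS] token representation
        return self.classifier(pooled).squeeze(-1)   # one logit per option

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
passage, question = "passage text ...", "question text ..."
options = ["option A", "option B", "option C"]
enc = tokenizer([passage] * len(options),
                [question + " " + o for o in options],
                padding=True, truncation=True, return_tensors="pt")
model = MCQAScorer()
logits = model(enc["input_ids"], enc["attention_mask"])               # shape: (n_options,)
loss = nn.CrossEntropyLoss()(logits.unsqueeze(0), torch.tensor([1]))  # gold option index 1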
Methods ::: Multi-step Attention Network
For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention network widely used in span-based QA tasks BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.
The MAN classifier works as follows. A question and an answer option together are considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and the (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them as a pair performs better.
We then perform $K$-step reasoning over the memory to output the final prediction. The initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in {1,2,...,K-1}$, the state is calculated by:
where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:
Basically, the MAN classifier calculates the attention scores between the passage and the (question, option) pair step by step, so that the attention can refine itself through several steps of deliberation. The attention mechanism helps filter out information in the passage that is irrelevant to the (question, option) pair.
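A compact sketch of the attention computations described above; since the display equations for the state update and the final logit are not reproduced in this text, the GRU-style state update and the final linear scoring layer below are assumptions used as placeholders:

import torch
import torch.nn as nn

class MANClassifier(nn.Module):
    def __init__(self, d, steps=5):
        super().__init__()
        self.steps = steps
        self.w1 = nn.Linear(d, 1, bias=False)      # self-attention weights over the passage memory
        self.w2 = nn.Linear(2 * d, 1, bias=False)  # attention weights over [state; H^QO]
        self.state_update = nn.GRUCell(d, d)       # assumed state-update rule (omitted in the text)
        self.out = nn.Linear(2 * d, 1)             # assumed final scoring layer on [s; x]

    def forward(self, H_P, H_QO):
        # H_P: (p, d) passage memory, H_QO: (q, d) (question, option) memory.
        alpha = torch.softmax(self.w1(H_P).squeeze(-1), dim=0)      # attention over passage tokens
        s = (alpha.unsqueeze(-1) * H_P).sum(dim=0)                  # s^0: summary of H^P
        x = torch.zeros_like(s)
        for _ in range(self.steps):
            s_exp = s.unsqueeze(0).expand_as(H_QO)                  # broadcast state to each QO token
            beta = torch.softmax(self.w2(torch.cat([s_exp, H_QO], dim=-1)).squeeze(-1), dim=0)
            x = (beta.unsqueeze(-1) * H_QO).sum(dim=0)              # x^k: attended QO summary
            s = self.state_update(x.unsqueeze(0), s.unsqueeze(0)).squeeze(0)
        return self.out(torch.cat([s, x], dim=-1))                  # logit p for this option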
Methods ::: Two Stage Training
We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10.
Methods ::: Two Stage Training ::: Coarse-tuning Stage
We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only the NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details.
Methods ::: Two Stage Training ::: Multi-task Learning Stage
After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset via multi-task learning. We share all model parameters, including the sentence encoder as well as the top-level classifier, between these two datasets.
Experimental Setup ::: Datasets
We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits.
Experimental Setup ::: Speaker Normalization
Passages in the DREAM dataset are dialogues between two or more persons. Every utterance in a dialogue starts with the speaker name. For example, in the utterance “m: How would he know?”, “m” is the abbreviation of “man”, indicating that this utterance is from a man. More than 90% of utterances have the speaker names “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear to the model which speaker the question is asking about, we used a speaker normalization strategy, replacing “w” or “f” with “woman” and “m” with “man” in the speaker names of the utterances. We found this simple strategy quite effective, providing a 1% improvement. We always use this strategy for the DREAM dataset unless explicitly mentioned otherwise.
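A minimal sketch of such a normalization step; the paper only gives the abbreviation mapping, so the regular expression below is an assumption:

import re

SPEAKER_MAP = {"w": "woman", "f": "woman", "m": "man"}

def normalize_speakers(utterance: str) -> str:
    # Replace an abbreviated speaker prefix such as "m:" with its full form, e.g. "man:".
    match = re.match(r"^([wfm]):\s*(.*)$", utterance.strip(), flags=re.IGNORECASE)
    if match:
        abbrev, rest = match.group(1).lower(), match.group(2)
        return f"{SPEAKER_MAP[abbrev]}: {rest}"
    return utterance

print(normalize_speakers("m: How would he know?"))  # -> "man: How would he know?"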
Experimental Setup ::: Multi-task Learning
For the multi-task learning stage, at each training step we randomly selected one of the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early-stopping criterion was met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17.
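A small sketch of this proportional task sampling; the dataset names and batch iterators are placeholders, and the caller is assumed to run one optimisation step per yielded batch:

import random

def proportional_sampler(loaders, sizes, max_steps):
    # loaders: task name -> (infinite) iterator over batches; sizes: task name -> dataset size.
    total = sum(sizes.values())
    tasks, probs = zip(*[(t, n / total) for t, n in sizes.items()])
    for _ in range(max_steps):
        task = random.choices(tasks, weights=probs, k=1)[0]  # sample a task proportionally to its size
        yield task, next(loaders[task])                      # fetch one batch from the sampled task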
Experimental Setup ::: Training Details
We used a linear learning rate decay schedule with a warm-up proportion of $0.1$. We set the dropout rate to $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for the DREAM dataset and 0 for the other datasets. The learning rate and number of training epochs vary for different datasets and encoder types and are summarized in Section 1 of the Supplementary Material.
In the TOEFL dataset, more than 90% of passages have more than 512 words, which exceeds the maximum sequence length that BERT supports, so we cannot process a whole passage within one forward pass. To solve this issue, we propose a sliding-window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets; each snippet from the same passage is assigned the same label. In the training phase, all snippets are used for training; in the inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with the highest logit value as the prediction. In experiments, we found that an overlap of 256 words is optimal, improving the BERT-Base model from an accuracy of 50.0% to 53.2%. We adopted this sliding-window strategy only for the TOEFL dataset.
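A hedged sketch of the sliding-window strategy; the window and overlap values follow the description above, while the logit aggregation by summation is an assumption since the exact aggregation function is not specified:

def make_snippets(tokens, window=512, overlap=256):
    # Split a long token sequence into overlapping snippets of at most `window` tokens.
    stride = window - overlap
    return [tokens[start:start + window]
            for start in range(0, max(len(tokens) - overlap, 1), stride)]

def aggregate_prediction(per_snippet_logits):
    # per_snippet_logits: list of per-option logit lists, one per snippet of the same passage.
    summed = [sum(vals) for vals in zip(*per_snippet_logits)]
    return max(range(len(summed)), key=lambda i: summed[i])  # index of the predicted option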
Results
We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all of our proposed method, MMM (MAN classifier + speaker normalization + two-stage learning strategies). As direct comparisons, we also list in parentheses the accuracy increment between MMM and the baseline with the same sentence encoder, from which we can see that the performance gain is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to human performance. Overall, MMM has achieved a new SOTA, i.e., a test accuracy of 88.9%, which exceeds the previous best by 16.9%.
We also test our method on three other MCQA datasets: MCTest, including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method improves the BERT-Large model by at least 10%. For both the MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9, the only difference being that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than on MC500. We think the reason is that the data size of MC160 is not sufficient to fine-tune the large models, which have a huge number of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of the BERT and RoBERTa models on the small datasets, so that the best performance on MC160 even surpasses that on MC500. This demonstrates the effectiveness of our method.
To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time with the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second-stage multi-task learning part hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, as it provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we see a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement.
Discussion ::: Why does natural language inference help?
As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture that one of the reasons is that, in order to pick the correct answer, we often need to rely on language inference capability. As an example in Table TABREF1, the utterance highlighted in bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment of the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the (question, answer) pair as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that is best entailed by the premise. In this sense, part of the MCQA task can be deemed an NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and can help support other tasks that require higher-level language processing abilities BIBREF21. We provide several more examples that require language inference reading skills in Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data in the coarse-tuning stage.
Discussion ::: Can other tasks help with MCQA?
By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks such as sentiment classification, paraphrasing also help with MCQA problems?
To answer this question, we selected several representative datasets for five categories of up-stream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base model on each of the five categories and then further fine-tune it on the target datasets: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k training examples) and the Yelp dataset (around 430k training examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets, but the paraphrase datasets do bring some improvements. For span-based QA, only SQuAD 2.0 helps to improve the performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it makes the performance worst. This suggests that span-based QA might not be an appropriate source task for transfer learning to MCQA. We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.
For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QNLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three combinations: MultiNLI alone, MultiNLI plus SNLI (denoted as “NLI”), and all five datasets together (denoted as “GLUE-NLI”). As the results in Table TABREF23 show, NLI and GLUE-NLI are comparable and both improve the target dataset by a large margin.
Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on the RACE dataset, boosts the performance most. This result agrees with the intuition that an in-domain dataset can be the most ideal data for transfer learning.
In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful.
Discussion ::: NLI dataset helps with convergence
The first stage of coarse-tuning with NLI data not only improves accuracy but also helps the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models, which have a much larger number of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets, convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning makes the training loss of the BERT-Base model decrease much faster. More importantly, without coarse-tuning the BERT-Large model does not converge at all during the first several epochs, an issue that is completely resolved with the help of NLI data.
Discussion ::: Multi-stage or Multi-task
In a typical scenario where we have one source and one target dataset, a natural question is whether we should train a model on them simultaneously via multi-task learning or first train on the source dataset and then on the target sequentially. Many previous works adopted the latter approach BIBREF19, BIBREF20, BIBREF23, and BIBREF20 demonstrated that sequential fine-tuning outperforms multi-task learning in their experiments. However, we observed the opposite in our experiments. Specifically, we conducted a pair of control experiments: in one, we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune it on the target dataset; in the other, we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that, compared with sequential fine-tuning, multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some of the information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. In comparison, this information is retained in the multi-task learning setting and thus better helps improve performance on the target dataset.
Given that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained on the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27 show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datasets.
Discussion ::: Multi-step reasoning is important
Previous results show that the MAN classifier improves over the FCNN classifier, but we are also interested in how the performance changes as we vary the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we use the FCNN rather than MAN as the classifier. We observe a gradual improvement as we increase $K$ from 1 to 5, but beyond 5 steps the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to show its benefits.
Discussion ::: Could the source dataset also benefit?
So far we have been discussing the case where we perform multi-task learning with the source dataset RACE and various much smaller target datasets in order to improve the targets. We also want to see whether our proposed techniques can benefit the source dataset itself. Table TABREF31 summarizes the results of the BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques bring improvements over the baseline model for the source dataset RACE, among which the NLI coarse-tuning stage elevates the scores most.
Since we found that all parts of MMM work well for the source dataset, we used them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we list the officially reported scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained with the RoBERTa-Large encoder.
Discussion ::: Error Analysis
In order to investigate how well our model performs on different types of questions, we conducted an error analysis by first randomly selecting 150 samples from the development set of the DREAM dataset that were wrongly predicted by the BERT-Base baseline model. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy for each question type in the last column of Table TABREF34. We find that our best model improves significantly upon every question type, especially the matching problems; most surprisingly, it even greatly improves on arithmetic problems, achieving an accuracy of 73.7%.
However, can our model really do math? To investigate this question, we sampled some arithmetic questions that were correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model could still make the correct choices. We found that our model is very fragile to these minor alterations, implying that it is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material.
Related Work
There is increasing interest in machine reading comprehension (MRC) for question answering (QA). Extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free-text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers in these datasets are still extractive. Multiple-choice QA datasets are collected either via crowdsourcing or from examinations designed by educational experts BIBREF7. In this type of QA dataset, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.
Progress in MRC research relies first on breakthroughs in sentence encoders, from basic LSTMs to pre-trained transformer-based models BIBREF8, which have elevated the performance of all MRC models by a large margin. In addition, attention mechanisms between the context and the query can give neural models higher performance BIBREF11, and techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can also be helpful.
Transfer learning has been widely proven to be effective across many domains in NLP. In the QA domain, the most well-known example of transfer learning is fine-tuning pre-trained language models such as BERT on downstream QA datasets such as SQuAD BIBREF8. Multi-task learning can also be deemed a type of transfer learning, since during training on multiple datasets from different domains for different tasks, knowledge is shared and transferred from each task to the others; this has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task.
Conclusions
We propose MMM, a multi-stage multi-task transfer learning method for multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also performed a detailed analysis to explore the importance of both our training strategies and different kinds of in-domain and out-of-domain datasets. We hope our work can also shed light on new directions for other NLP domains. | DREAM, MCTest, TOEFL, and SemEval-2018 Task 11
4cbc56d0d53c4c03e459ac43e3c374b75fd48efe | 4cbc56d0d53c4c03e459ac43e3c374b75fd48efe_0 | Q: What baselines did they consider?
Text: INTRODUCTION
Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.
A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3.
The number of RCTs is increasing, and with it the potential number of reviews and the workload implied for each. Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly over the last ten years BIBREF4, which is why accelerating the systematic reviewing process is of interest, in order to decrease the working hours of highly trained researchers and to make the process more efficient.
In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. Focus points for the investigation are the problems of ambiguity in labelled PIO data, integration of training data from different tasks and sources and assessing our model's capacity for transfer learning and domain adaptation.
Recent advances in natural language processing (NLP) offer the potential to automate or semi-automate the process of identifying information to be included in a SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstracts or full-text articles fit the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic but static representation of words as in Word2Vec BIBREF5, providing a richer and more flexible contextualized representation of input features within sentences or long sequences of text.
The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data, and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in section 3, and section 4 includes a critical evaluation and implications for practice.
INTRODUCTION ::: Tools for SR automation and PICO classification
The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.
In the following we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models.
INTRODUCTION ::: Sentence classification data
In the context of systematic review (semi)automation, sentence classification can be used in the screening process, by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of class abbreviations and their meaning. In the following we refer to it as the PubMed data.
The LSTM itself yields impressive results, with F1 scores of up to 0.85 for the annotation of PIO elements; it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model.
INTRODUCTION ::: Question answering data ::: SQuAD
The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions and answers and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which can not be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leader board to compare model performances BIBREF14.
INTRODUCTION ::: Question answering data ::: Ebm-nlp
In the PICO domain, the potential of NER was shown by Nye and colleagues using transformers, as well as LSTMs and conditional random fields. In the following, we refer to these data as the ebm-nlp corpus BIBREF15. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotations in this corpus include PIO classes, as well as more detailed information such as age, gender or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system.
INTRODUCTION ::: Introduction to transformers
In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words. Any word that is not present in the initial vocabulary is split into a sub-word vocabulary. Especially in the biomedical domain this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, hence enabling the following model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words.
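As a small illustration of word-piece tokenization in Python (the checkpoint name is an assumption, and the exact sub-word split depends on the vocabulary in use):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("two drops of ketorolac tromethamine"))
# Rare words are split into sub-word units, so no input token is ever mapped to "unknown".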
BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes prediction of randomly masked words. Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can be carried out on the basis of comparably small amounts of labelled data, by changing the upper layers of the neural network to classification layers for different tasks.
SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and Bert multilingual (cased, base architecture) were included in the comparison BIBREF16.
INTRODUCTION ::: Weaknesses in the previous sentence classification approach
In the following, we discuss weaknesses in the PubMed data, and LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation.
In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place because manual gold standard annotations for a project on the scale of a LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks.
We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks.
A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing reader's attention to passages of interest. However, the data obtained through this method are not fine-grained enough for usage in data extraction, or for the use in pipelines for automated evidence synthesis. Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences.
INTRODUCTION ::: Contributions of this research
In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.
In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities.
METHODOLOGY ::: Feature representation and advantages of contextualization
A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed length vector of 768 dimensions in each layer respectively, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer-embedding into two-dimensional space, and plotted the resulting values. Additionally, we computed adjusted rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm in different layers alone led to correct grouping of the input sentences.
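A sketch of this embedding evaluation, assuming the pooled sentence vectors have already been extracted into a NumPy array (the layer extraction and pooling themselves are omitted):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import adjusted_rand_score

def evaluate_layer(embeddings: np.ndarray, labels: np.ndarray):
    # embeddings: (n_sentences, 768) layer outputs; labels: gold P/I/O class per sentence.
    coords = TSNE(n_components=2).fit_transform(embeddings)   # 2-D projection for plotting
    clusters = KMeans(n_clusters=3).fit_predict(embeddings)   # naive clustering in the original space
    return coords, adjusted_rand_score(labels, clusters)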
METHODOLOGY ::: Sentence classification ::: Preparation of the data
We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. For more information about the original dataset we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review (semi)automation commonly focus on P, I, and O detection. The A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and occur in the vast majority of published trial texts. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment we summarized A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to PIO classes. This resulted in high class imbalance, inferior classification scores and a loss of the ability to predict these classes when supporting systematic reviewers during the screening process.
In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training, and 14,344 for testing (90:10 split).
METHODOLOGY ::: Sentence classification ::: Fine-tuning
We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and SCIBERT. We changed the classification layer on top of the original BERT model. It remains a linear, fully connected layer but now employs the sigmoid cross-entropy loss with logits for optimization. During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model predicts class labels from Table 1 for each sentence, and after each training step backpropagation adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, a batch size of 32, a learning rate of $2\times 10^{-5}$, a warm-up proportion of 0.1 and two epochs of training were used.
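A minimal sketch of this multi-label classification head; the hyper-parameters follow the description above where given, and the [CLS]-pooling and checkpoint name are assumptions:

import torch
import torch.nn as nn
from transformers import AutoModel

class MultiLabelSentenceClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_labels=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.loss_fn = nn.BCEWithLogitsLoss()   # sigmoid cross-entropy with logits

    def forward(self, input_ids, attention_mask, labels=None):
        pooled = self.encoder(input_ids=input_ids, attention_mask=attention_mask)[0][:, 0]
        logits = self.head(pooled)                               # one logit per class
        if labels is not None:
            return self.loss_fn(logits, labels.float()), logits  # multi-label training loss
        return torch.sigmoid(logits)                             # per-class probabilities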
METHODOLOGY ::: Sentence classification ::: Post-training assignment of classes
In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset.
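A short sketch of this post-hoc label assignment (the threshold value is illustrative):

import numpy as np

def assign_labels(probs: np.ndarray, threshold: float = 0.5):
    # probs: (n_sentences, 7) class probabilities from the sigmoid output layer.
    multi = probs >= threshold     # multi-label assignment via thresholding
    single = probs.argmax(axis=1)  # or: assign only the most probable class
    return multi, single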
METHODOLOGY ::: Question answering ::: Preparation of the data
Both the training and testing subsets from the ebm-nlp data were adapted to fit the SQuAD format. We merged both datasets in order to train a model which firstly correctly answers PICO questions on the basis of being trained with labelled ebm-nlp data, and secondly retains the flexibility of general-purpose question answering on the basis of SQuAD. We created sets of general, differently phrased P, I, and O questions for the purpose of training a broad representation of each PICO element question.
In this section we describe the process of adapting the ebm-nlp data to the second version of the SQuAD format, and then augmenting the training data with some of the original SQuAD data. Figure FIGREF19 shows an example of the converted data, together with a high-level software architecture description for our QA-BERT model. We created a conversion script to automate this task. To reduce context length, it first split each ebm-nlp abstract into sentences. For each P, I, and O class it checked for the presence of annotated entity spans in the ebm-nlp source files. Then, a question was randomly drawn from our set of general questions for this class to complete a context and a span-answer pair, forming a new SQuAD-like question element. In cases where a sentence did not contain a span, a question was still chosen, but the answer was marked as impossible, with the plausible answer span set to begin at character 0. In the absence of impossible answers, the model would always return some part of the context as an answer, and hence be of no use for rarer entities such as P, which occurs in only 30% of all context sentences.
For the training data, each context can contain one possible answer, whereas for testing multiple question-answer pairs are permitted. An abstract is represented as a domain, subsuming its sentences and question answer-text pairs. In this format, our adapted data are compatible with the original SQuAD v.2 dataset, so we chose varying numbers of original SQuAD items and shuffled them into the training data. This augmentation of the training data aims to reduce the dependency on large labelled corpora for PICO entity extraction. Testing data can optionally be enriched in the same way, but for the presentation of our results we aimed to be comparable with previously published models and therefore chose to evaluate only on the subset of expert-annotated ebm-nlp testing data.
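A hedged sketch of building one SQuAD-v2-style entry from a sentence and an optional annotated span; the field names follow the SQuAD v2 JSON schema, while the question pool and helper inputs are assumptions:

import random

# Illustrative question pool for the P (population) class; the real question sets are not shown here.
P_QUESTIONS = ["Who was enrolled in the study?", "What was the study population?"]

def to_squad_paragraph(sentence, span, qid):
    # span: (start, end) character offsets of an annotated entity, or None if the sentence has no entity.
    qa = {"question": random.choice(P_QUESTIONS), "id": qid}
    if span is None:
        qa.update({"is_impossible": True, "answers": [],
                   "plausible_answers": [{"text": "", "answer_start": 0}]})
    else:
        start, end = span
        qa.update({"is_impossible": False,
                   "answers": [{"text": sentence[start:end], "answer_start": start}]})
    return {"context": sentence, "qas": [qa]}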
METHODOLOGY ::: Question answering ::: Fine-tuning
The python Huggingface Transformers library was used for fine-tuning the question-answering models. This classification works by adding a span-classification head on top of a pre-trained transformer model. The span-classification mechanism learns to predict the most probable start and end positions of potential answers within a given context BIBREF22.
The Transformers library offers classes for tokenizers, BERT and other transformer models and provides methods for feature representation and optimization. We used BertForQuestionAnswering. Training was carried out on Google's Colab, using the GPU runtime option. We used a batch size of 18 per GPU and a learning rate of $3\times 10^{-5}$. Training lasted for 2 epochs, and context length was limited to 150. To reduce the time needed to train, we only used BERT-base (uncased) weights as starting points, and used a maximum of 200 out of the 442 SQuAD domains.
To date, the Transformers library includes several BERT, XLM, XLNet, DistilBERT and ALBERT question answering models that can be fine-tuned with the scripts and data that we describe in this paper.
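A small inference sketch with the BertForQuestionAnswering class mentioned above; the checkpoint name and example text are placeholders (in practice the fine-tuned weights would be loaded), and the output attribute access assumes a recent Transformers version:

import torch
from transformers import BertForQuestionAnswering, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")  # load fine-tuned weights in practice

question = "What was the intervention?"
context = "Patients received two drops of ketorolac tromethamine twice daily."
inputs = tokenizer(question, context, return_tensors="pt", truncation=True, max_length=150)

with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))  # predicted answer span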
RESULTS ::: Feature representation and contextualization
Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels.
Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.
Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base.
RESULTS ::: Sentence classification
Precision, recall, and F1 scores, including a comparison with the LSTM, are summarized in Table TABREF22. Underlined scores represent the top score across all models, and scores in bold are the best results for single- and multi-label cases respectively. The LSTM assigns one label only and was outperformed in all classes of main interest (P, I, and O).
A potential pitfall of turning this task into multi-label classification is an increase of false-positive predictions, as more labels are assigned than given in the single-labelled testing data in the first place. However, the fine-tuned BERT models achieved high F1 scores, and large improvements in terms of recall and precision. In its last row, Table TABREF22 shows different probability thresholds for class assignment when using the PubMed dataset and our fine-tuned SCIBERT model for multi-label prediction. After obtaining the model's predictions, a simple threshold parameter can be used to obtain the final class labels. On our labelled testing data, we tested 50 evenly spaced thresholds between 0 and 1 in order to obtain these graphs. Here, recall and precision scores in ranges between 0.92 and 0.97 are possible with F1 scores not dropping below 0.84 for the main classes of interest. In practice, the detachment between model predictions and assignment of labels means that a reviewer who wishes to switch between high recall and high precision results can do so very quickly, without obtaining new predictions from the model itself.
More visualizations can be found in this project's GitHub repository, including true class labels and a detailed breakdown of true and false predictions for each class. The highest proportion of false classifications appears between the results and conclusion classes.
The fine-tuned multilingual model showed marginally inferior classification scores on the exclusively English testing data. However, this model's contribution is not limited to the English language because its interior weights embed a shared vocabulary of 100 languages, including German and Chinese. Our evaluation of the multilingual model's capacity for language transfer is of a qualitative nature, as there were no labelled Chinese or German data available. Table TABREF24 shows examples of two abstracts, as predicted by the model. Additionally, this table demonstrates how a sentence prediction model can be used to highlight text. With the current infrastructure it is possible to highlight PICOs selectively, to highlight all classes simultaneously, and to adjust thresholds for class assignment in order to increase or decrease the amount of highlighted sentences. When applied to full texts of RCTs and cohort studies, we found that the model retained its ability to identify and highlight key sentences correctly for each class.
We tested various report types, as well as recent and old publications, but remain cautious that large-scale testing on labelled data is needed to draw solid conclusions on these models' abilities for transfer learning. For further examples in the English language, we refer to our GitHub repository.
RESULTS ::: Question answering
We trained and evaluated a model for each P, I, and O class. Table TABREF29 shows our results, indicated as QA-BERT, compared with the currently published leader board for the ebm-nlp data BIBREF25 and results reported by the authors of SCIBERT BIBREF18. For the P and I classes, our models outperformed the results on this leader board. The index in our model names indicates the number of additional SQuAD domains added to the training data. We never used the full SQuAD data in order to reduce time for training but observed increased performance when adding additional data. For classifying I entities, an increase from 20 to 200 additional SQuAD domains resulted in an increase of 8% for the F1 score, whereas the increase for the O domain was less than 1%. After training a model with 200 additional SQuAD domains, we also evaluated it on the original SQuAD development set and obtained an F1 score of 0.72 for this general reading comprehension task.
In this evaluation, the F1 scores represent the overlap of labelled and predicted answer spans on token level. We also obtained scores for the subgroups of sentences that did not contain an answer versus the ones that actually included PICO elements. These results are shown in Table TABREF30.
For the P class, only 30% of all sentences included an entity, whereas its sub-classes age, gender, condition and size averaged 10% each. In the remaining classes, these percentages were higher. F1 scores for correctly detecting that a sentence includes no PICO element exceeded 0.92 in all classes. This indicates that the addition of impossible answer elements was successful, and that the model learned a representation of how to discriminate PICO contexts. The scores for correctly predicting PICOs in positive scenarios are lower. These results are presented in Table TABREF30. Here, two factors could influence this score in a negative way. First, labelled spans can be noisy. Training spans were annotated by crowd workers and the authors of the original dataset noted inter-annotator disagreement. Often, these spans include full stops, other punctuation or different levels of detail describing a PICO. The F1 score decreases if the model predicts a PICO, but the predicted span includes marginal differences that were not marked up by the experts who annotated the testing set. Second, some spans include multiple PICOs, sometimes across sentence boundaries. Other spans mark up single PICOs in succession. In these cases the model might find multiple PICOs in a row, and annotate them as one or vice versa.
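A token-level overlap F1 of the kind used in this evaluation can be sketched as below (following the usual SQuAD-style formulation); it also illustrates why a stray full stop or an extra word in a predicted span lowers the score.

```python
# Sketch of a SQuAD-style token-overlap F1 between one predicted and one gold answer span.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A trailing full stop in the annotated span already costs a few points:
# token_f1("patients with sepsis", "patients with sepsis.")  -> 0.67
```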
DISCUSSION
In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs.
For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks.
However, tagging whole sentences with respect to populations, interventions and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned.
Our implementation of the question answering task has shown that a substantial number of PICO entities can be identified in abstracts on a token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be replaced with more advanced transformer architectures, including XLM, XLNet, DistilBERT and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26, pre-processing, and predicting more than one PICO per sentence are reserved for future work.
DISCUSSION ::: Limitations
Limitations in the automatically annotated PubMed training data mostly consist of incompletely detected or noisy P, I, and O entities due to the single labelling. We did not have access to multilingual annotated PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group.
For the question answering, we limited the use of original SQuAD domains to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings. Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time.
CONCLUSION
With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse.
In conclusion we wish to emphasize our argument that for future applications, interoperability is important. Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards.
The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data.
ACKNOWLEDGEMENTS
We would like to thank Clive Adams for providing testing data and feedback for this project. We thank Vincent Cheng for the Chinese translation. Furthermore, we thank the BERT team at Google Research and Allenai for making their pre-trained model weights available. Finally, we acknowledge the Huggingface team and thank them for implementing the SQuAD classes for Transformers.
FUNDING
LS was funded by the National Institute for Health Research (NIHR Systematic Review Fellowship, RM-SR-2017-09-028). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.
Availability of the code and data
Scripts and supplementary material, as well as further illustrations are available from https://github.com/L-ENA/HealthINF2020. Training data for sentence classification and question answering are freely available from the cited sources.
Additionally, the Cochrane Schizophrenia Group extracted, annotated and made available data from studies included in over 200 systematic reviews. This aims at supporting the development of methods for reviewing tasks, and to increase the re-use of their data. These data include risk-of-bias assessment, results including all clean and published outcome data extracted by reviewers, data on PICOs, methods, and identifiers such as PubMed ID and a link to their study-based register. Additionally, a senior reviewer recently carried out a manual analysis of all 33,000 outcome names in these reviews, parsed and allocated to 15,000 unique outcomes in eight main categories BIBREF27. | LSTM, SCIBERT |
e5a965e7a109ae17a42dd22eddbf167be47fca75 | e5a965e7a109ae17a42dd22eddbf167be47fca75_0 | Q: What are the problems related to ambiguity in PICO sentence prediction tasks?
Text: INTRODUCTION
Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.
A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3.
The number of RCTs is increasing, and with it increases the potential number of reviews and the amount of workload that is implied for each. Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly in the last ten years BIBREF4, which is why acceleration of the systematic reviewing process is of interest in order to decrease working hours of highly trained researchers and to make the process more efficient.
In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. Focus points for the investigation are the problems of ambiguity in labelled PIO data, integration of training data from different tasks and sources and assessing our model's capacity for transfer learning and domain adaptation.
Recent advances in natural language processing (NLP) offer the potential to be able to automate or semi-automate the process of identifying information to be included in a SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstract or full text article fits the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic, but static representation of words as in Word2Vec BIBREF5, hence providing a richer and more flexible contextualized representation of input features within sentences or long sequences of text.
The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data, and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in section 3, and section 4 includes a critical evaluation and implications for practice.
INTRODUCTION ::: Tools for SR automation and PICO classification
The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.
In the following we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models.
INTRODUCTION ::: Sentence classification data
In the context of systematic review (semi)automation, sentence classification can be used in the screening process, by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of class abbreviations and their meaning. In the following we refer to it as the PubMed data.
The LSTM itself yields impressive results, with F1 scores for annotation of up to 0.85 for PIO elements; it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model.
INTRODUCTION ::: Question answering data ::: SQuAD
The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions and answers and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which can not be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leader board to compare model performances BIBREF14.
INTRODUCTION ::: Question answering data ::: Ebm-nlp
In the PICO domain, the potential of NER was shown by Nye and colleagues using transformers, as well as LSTM and conditional random fields. In the following, we refer to these data as the ebm-nlp corpus BIBREF15. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotations in this corpus include PIO classes, as well as more detailed information such as age, gender or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system.
INTRODUCTION ::: Introduction to transformers
In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words. Any word that is not present in the initial vocabulary is split into a sub-word vocabulary. Especially in the biomedical domain this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, hence enabling the following model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words.
BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes prediction of randomly masked words. Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can be carried out on the basis of comparably small amounts of labelled data, by changing the upper layers of the neural network to classification layers for different tasks.
SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and Bert multilingual (cased, base architecture) were included in the comparison BIBREF16.
INTRODUCTION ::: Weaknesses in the previous sentence classification approach
In the following, we discuss weaknesses in the PubMed data, and LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation.
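The share of unknown tokens can be estimated with a simple vocabulary check, sketched below; `abstract_sentences` and `embedding_vocab` are placeholders for the register abstracts and the (trimmed) Word2Vec vocabulary.

```python
# Rough sketch for estimating the proportion of unique input tokens without an embedding vector.
def oov_rate(texts, embedding_vocab):
    unique_tokens = {tok for text in texts for tok in text.lower().split()}
    missing = sum(1 for tok in unique_tokens if tok not in embedding_vocab)
    return missing / len(unique_tokens)

# A value around 0.36 would correspond to the 36% of unknown unique features reported above.
```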
In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place, because manual gold standard annotations for a project on the scale of an LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks.
We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks.
A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing reader's attention to passages of interest. However, the data obtained through this method are not fine-grained enough for usage in data extraction, or for the use in pipelines for automated evidence synthesis. Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences.
INTRODUCTION ::: Contributions of this research
In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.
In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities.
METHODOLOGY ::: Feature representation and advantages of contextualization
A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed length vector of 768 dimensions in each layer respectively, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer-embedding into two-dimensional space, and plotted the resulting values. Additionally, we computed adjusted rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm in different layers alone led to correct grouping of the input sentences.
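A sketch of this analysis pipeline is given below, assuming `sentences` (the 3000 P/I/O sentences) and `labels` (their gold classes encoded as integers) are prepared elsewhere; the layer choice and pooling can be varied as described.

```python
# Sketch of the embedding analysis: mean-pool one hidden layer, project with t-SNE for
# plotting, and score a naive K-means clustering (N=3) against the P/I/O gold labels.
import torch
from transformers import BertModel, BertTokenizerFast
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import adjusted_rand_score

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

vectors = []
with torch.no_grad():
    for sentence in sentences:
        encoded = tokenizer(sentence, return_tensors="pt", truncation=True)
        layer = model(**encoded).hidden_states[-1]      # pick (or concatenate) layers here
        vectors.append(layer.mean(dim=1).squeeze(0))    # reduce_mean pooling over tokens
vectors = torch.stack(vectors).numpy()

points_2d = TSNE(n_components=2).fit_transform(vectors)     # for the scatter plot
clusters = KMeans(n_clusters=3).fit_predict(vectors)         # naive clustering
print(adjusted_rand_score(labels, clusters))                 # adjusted rand score
```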
METHODOLOGY ::: Sentence classification ::: Preparation of the data
We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. For more information about the original dataset we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review (semi)automation commonly focus on P, I, and O detection. A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and they occur in a vast majority of published trial text. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment we summarized A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to PIO classes. This resulted in high class imbalance, inferior classification scores and a loss of ability to predict these classes when supporting systematic reviewers during the screening process.
In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training, and 14,344 for testing (90:10 split).
METHODOLOGY ::: Sentence classification ::: Fine-tuning
We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and on SCIBERT. We changed the classification layer on top of the original BERT model. It remains a linear, fully connected layer but now employs the sigmoid cross-entropy loss with logits function for optimization. During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model then predicts class labels from Table TABREF4 for each sentence. After each training step, backpropagation then adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, batch size 32, learning rate of $2\times 10^{-5}$, a warm-up proportion of 0.1 and two epochs for training were used.
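A minimal version of this classification head is sketched below (PyTorch, transformers v4 assumed); it is meant only to illustrate the sigmoid cross-entropy multi-label setup, not to reproduce the exact training script.

```python
# Sketch of a multi-label sentence classifier: linear layer over the pooled BERT output,
# optimized with sigmoid cross-entropy (BCE with logits) over the seven sentence labels.
import torch
import torch.nn as nn
from transformers import BertModel

class MultiLabelSentenceClassifier(nn.Module):
    def __init__(self, pretrained="bert-base-uncased", num_labels=7):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, labels=None):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        logits = self.classifier(pooled)
        if labels is not None:                       # labels: float 0/1 vector per sentence
            return nn.BCEWithLogitsLoss()(logits, labels), logits
        return torch.sigmoid(logits)                 # per-class probabilities at inference
```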
METHODOLOGY ::: Sentence classification ::: Post-training assignment of classes
In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset.
METHODOLOGY ::: Question answering ::: Preparation of the data
Both the training and testing subsets from the ebm-nlp data were adapted to fit the SQuAD format. We merged both datasets in order to train a model which firstly correctly answers PICO questions on the basis of being trained with labelled ebm-nlp data, and secondly retains the flexibility of general-purpose question answering on the basis of SQuAD. We created sets of general, differently phrased P, I, and O questions for the purpose of training a broad representation of each PICO element question.
In this section we describe the process of adapting the ebm-nlp data to the second version of the SQuAD format, and then augmenting the training data with some of the original SQuAD data. Figure FIGREF19 shows an example of the converted data, together with a high-level software architecture description for our QA-BERT model. We created a conversion script to automate this task. To reduce context length, it first split each ebm-nlp abstract into sentences. For each P, I, and O class it checked the presence of annotated entity spans in the ebm-nlp source files. Then, a question was randomly drawn from our set of general questions for this class, to complete a context and a span-answer pair in forming a new SQuAD-like question element. In cases where a sentence did not contain a span, a question was still chosen, but the answer was marked as impossible, with the plausible answer span set to begin at character 0. In the absence of impossible answers, the model would always return some part of the context as answer, and hence be of no use for rarer entities such as P, which occurs in only 30% of all context sentences.
For the training data, each context can contain one possible answer, whereas for testing multiple question-answer pairs are permitted. An abstract is represented as a domain, subsuming its sentences and question answer-text pairs. In this format, our adapted data are compatible with the original SQuAD v.2 dataset, so we chose varying numbers of original SQuAD items and shuffled them into the training data. This augmentation of the training data aims to reduce the dependency on large labelled corpora for PICO entity extraction. Testing data can optionally be enriched in the same way, but for the presentation of our results we aimed to be comparable with previously published models and therefore chose to evaluate only on the subset of expert-annotated ebm-nlp testing data.
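The conversion can be sketched as follows; the question pool, identifiers and field names are illustrative placeholders rather than the exact conversion script described above.

```python
# Sketch: turning one ebm-nlp sentence into a SQuAD-v2-style entry with possible and
# impossible answers, as described above. QUESTION_POOL and the ids are hypothetical.
import random

QUESTION_POOL = {
    "P": ["Who was enrolled in this trial?", "What was the population of this study?"],
    "I": ["Which intervention was investigated?"],
    "O": ["Which outcomes were measured?"],
}

def to_squad_item(sentence, pico_class, span_text=None, span_start=None):
    qa = {"id": f"{pico_class}-{random.randrange(10**8)}",
          "question": random.choice(QUESTION_POOL[pico_class])}
    if span_text is None:                      # no annotated entity in this sentence
        qa.update({"is_impossible": True,
                   "answers": [],
                   "plausible_answers": [{"text": "", "answer_start": 0}]})
    else:
        qa.update({"is_impossible": False,
                   "answers": [{"text": span_text, "answer_start": span_start}]})
    return {"context": sentence, "qas": [qa]}
```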
METHODOLOGY ::: Question answering ::: Fine-tuning
The python Huggingface Transformers library was used for fine-tuning the question-answering models. This classification works by adding a span-classification head on top of a pre-trained transformer model. The span-classification mechanism learns to predict the most probable start and end positions of potential answers within a given context BIBREF22.
The Transformers library offers classes for tokenizers, BERT and other transformer models and provides methods for feature representation and optimization. We used BertForQuestionAnswering. Training was carried out on Google's Colab, using the GPU runtime option. We used a batch size of 18 per GPU and a learning rate of $3\times 10^{-5}$. Training lasted for 2 epochs, and context length was limited to 150. To reduce the time needed to train, we only used BERT-base (uncased) weights as starting points, and used a maximum of 200 out of the 442 SQuAD domains.
To date, the Transformers library includes several BERT, XLM, XLNet, DistilBERT and ALBERT question answering models that can be fine-tuned with the scripts and data that we describe in this paper.
RESULTS ::: Feature representation and contextualization
Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels.
Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.
Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base.
RESULTS ::: Sentence classification
Precision, recall, and F1 scores, including a comparison with the LSTM, are summarized in Table TABREF22. Underlined scores represent the top score across all models, and scores in bold are the best results for single- and multi-label cases respectively. The LSTM assigns one label only and was outperformed in all classes of main interest (P, I, and O).
A potential pitfall of turning this task into multi-label classification is an increase of false-positive predictions, as more labels are assigned than given in the single-labelled testing data in the first place. However, the fine-tuned BERT models achieved high F1 scores, and large improvements in terms of recall and precision. In its last row, Table TABREF22 shows different probability thresholds for class assignment when using the PubMed dataset and our fine-tuned SCIBERT model for multi-label prediction. After obtaining the model's predictions, a simple threshold parameter can be used to obtain the final class labels. On our labelled testing data, we tested 50 evenly spaced thresholds between 0 and 1 in order to obtain these graphs. Here, recall and precision scores in ranges between 0.92 and 0.97 are possible with F1 scores not dropping below 0.84 for the main classes of interest. In practice, the detachment between model predictions and assignment of labels means that a reviewer who wishes to switch between high recall and high precision results can do so very quickly, without obtaining new predictions from the model itself.
More visualizations can be found in this project's GitHub repository, including true class labels and a detailed breakdown of true and false predictions for each class. The highest proportion of false classifications appears between the results and conclusion classes.
The fine-tuned multilingual model showed marginally inferior classification scores on the exclusively English testing data. However, this model's contribution is not limited to the English language because its interior weights embed a shared vocabulary of 100 languages, including German and Chinese. Our evaluation of the multilingual model's capacity for language transfer is of a qualitative nature, as there were no labelled Chinese or German data available. Table TABREF24 shows examples of two abstracts, as predicted by the model. Additionally, this table demonstrates how a sentence prediction model can be used to highlight text. With the current infrastructure it is possible to highlight PICOs selectively, to highlight all classes simultaneously, and to adjust thresholds for class assignment in order to increase or decrease the amount of highlighted sentences. When applied to full texts of RCTs and cohort studies, we found that the model retained its ability to identify and highlight key sentences correctly for each class.
We tested various report types, as well as recent and old publications, but remain cautious that large-scale testing on labelled data is needed to draw solid conclusions on these models' abilities for transfer learning. For further examples in the English language, we refer to our GitHub repository.
RESULTS ::: Question answering
We trained and evaluated a model for each P, I, and O class. Table TABREF29 shows our results, indicated as QA-BERT, compared with the currently published leader board for the ebm-nlp data BIBREF25 and results reported by the authors of SCIBERT BIBREF18. For the P and I classes, our models outperformed the results on this leader board. The index in our model names indicates the number of additional SQuAD domains added to the training data. We never used the full SQuAD data in order to reduce time for training but observed increased performance when adding additional data. For classifying I entities, an increase from 20 to 200 additional SQuAD domains resulted in an increase of 8% for the F1 score, whereas the increase for the O domain was less than 1%. After training a model with 200 additional SQuAD domains, we also evaluated it on the original SQuAD development set and obtained an F1 score of 0.72 for this general reading comprehension task.
In this evaluation, the F1 scores represent the overlap of labelled and predicted answer spans on token level. We also obtained scores for the subgroups of sentences that did not contain an answer versus the ones that actually included PICO elements. These results are shown in Table TABREF30.
For the P class, only 30% of all sentences included an entity, whereas its sub-classes age, gender, condition and size averaged 10% each. In the remaining classes, these percentages were higher. F1 scores for correctly detecting that a sentence includes no PICO element exceeded 0.92 in all classes. This indicates that the addition of impossible answer elements was successful, and that the model learned a representation of how to discriminate PICO contexts. The scores for correctly predicting PICOs in positive scenarios are lower. These results are presented in Table TABREF30. Here, two factors could influence this score in a negative way. First, labelled spans can be noisy. Training spans were annotated by crowd workers and the authors of the original dataset noted inter-annotator disagreement. Often, these spans include full stops, other punctuation or different levels of detail describing a PICO. The F1 score decreases if the model predicts a PICO, but the predicted span includes marginal differences that were not marked up by the experts who annotated the testing set. Second, some spans include multiple PICOs, sometimes across sentence boundaries. Other spans mark up single PICOs in succession. In these cases the model might find multiple PICOs in a row, and annotate them as one or vice versa.
DISCUSSION
In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs.
For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks.
However, tagging whole sentences with respect to populations, interventions and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned.
Our implementation of the question answering task has shown that a substantial number of PICO entities can be identified in abstracts on a token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be replaced with more advanced transformer architectures, including XLM, XLNet, DistilBERT and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26, pre-processing, and predicting more than one PICO per sentence are reserved for future work.
DISCUSSION ::: Limitations
Limitations in the automatically annotated PubMed training data mostly consist of incompletely detected or noisy P, I, and O entities due to the single labelling. We did not have access to multilingual annotated PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group.
For the question answering, we limited the use of original SQuAD domains to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings. Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time.
CONCLUSION
With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse.
In conclusion we wish to emphasize our argument that for future applications, interoperability is important. Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards.
The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data.
ACKNOWLEDGEMENTS
We would like to thank Clive Adams for providing testing data and feedback for this project. We thank Vincent Cheng for the Chinese translation. Furthermore, we thank the BERT team at Google Research and Allenai for making their pre-trained model weights available. Finally, we acknowledge the Huggingface team and thank them for implementing the SQuAD classes for Transformers.
FUNDING
LS was funded by the National Institute for Health Research (NIHR Systematic Review Fellowship, RM-SR-2017-09-028). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.
Availability of the code and data
Scripts and supplementary material, as well as further illustrations are available from https://github.com/L-ENA/HealthINF2020. Training data for sentence classification and question answering are freely available from the cited sources.
Additionally, the Cochrane Schizophrenia Group extracted, annotated and made available data from studies included in over 200 systematic reviews. This aims at supporting the development of methods for reviewing tasks, and to increase the re-use of their data. These data include risk-of-bias assessment, results including all clean and published outcome data extracted by reviewers, data on PICOs, methods, and identifiers such as PubMed ID and a link to their study-based register. Additionally, a senior reviewer recently carried out a manual analysis of all 33,000 outcome names in these reviews, parsed and allocated to 15,000 unique outcomes in eight main categories BIBREF27. | Some sentences are associated to ambiguous dimensions in the hidden state output |
7d59374d9301a0c09ea5d023a22ceb6ce07fb490 | 7d59374d9301a0c09ea5d023a22ceb6ce07fb490_0 | Q: How do they measure the diversity of inferences?
Text: Introduction
Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because understanding events is an important component of NLP. Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on. However, this still remains a challenging task for NLP systems. This is partly because most of them are trained on task-specific datasets or objectives, which results in models that are adept at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.
To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, which mainly focus on nine If-Then reasoning types describing causes, effects, intents and participant characteristics of events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.
However, there still remain two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, there could be multiple plausible feelings of PersonX about that event (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7.
Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, there could be multiple plausible feelings of PersonX upon the event “PersonX finds a new job”. However, once given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.
To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generating diversified inferences BIBREF8, BIBREF9.
In addition to the traditional VAE structure, we introduce an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consisting of three narrative story corpora that contain rich event background knowledge) to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of the If-Then inferential target (e.g., intents, reactions, etc.).
Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE.
Background
Before specifically describing two dataset —- Event2Mind and Atomic used in this paper as well as the If-Then reasoning task, for clarity, we define the following terminologies:
Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.
Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.
Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.
Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.
Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic.
Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequences of words: $x=\lbrace x_1,\dots , x_{m}\rbrace $, and $y=\lbrace y_1,\dots , y_{n}\rbrace $, where $m$ and $n$ denote the length of $x$ and $y$, respectively.
Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\theta $) $p_{\theta }(y|x,z)$ and $p_{\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$.
CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\phi $) $q_{\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:
$L^{ELBO}=E_{q_{\phi }(z|x,y)}\left[\log p_{\theta }(y|x,z)\right]-\mathrm {KL}\left(q_{\phi }(z|x,y)\Vert p_{\theta }(z|x)\right)\le \log p(y|x)$
Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\theta }(z|x)$ as a prior network, $q_{\phi }(z|x,y)$ as a recognition network, and $p_{\theta }(y|x,z)$ as a neural decoder.
Context-aware Variational Autoencoder
Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. While in this paper we model the If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets.
To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.
Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\prime }}$.
Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ is generated based on $x$ and samples of $z_{c^{\prime }}$, where $z_{c^{\prime }}$ contains rich event background knowledge helpful for If-Then reasoning.
Context-aware Variational Autoencoder ::: Architecture of CWVAE
As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets.
Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\lbrace h_1^c,\dots ,h_{l_c}^c\rbrace $, $h^x=\lbrace h_1^x,\dots ,h_{l_x}^x\rbrace $ and $h^y=\lbrace h_1^y,\dots ,h_{l_y}^y\rbrace $, where $l_c$, $l_x$ and $l_y$ is the length of $c$, $x$ and $y$, respectively.
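A minimal sketch of such an encoder is given below (PyTorch; the dimensions follow the training details reported later, and all names are illustrative).

```python
# Sketch of the bidirectional GRU encoder producing one hidden state per input token.
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # initialised from GloVe in practice
        self.gru = nn.GRU(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        outputs, _ = self.gru(self.embedding(token_ids))
        return outputs        # h = {h_1, ..., h_l}, one (2 * hidden_dim) vector per token
```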
Recognition Network The recognition network models $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$, $q_{\phi }(z|z_{c^{\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$.
Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distributions with a diagonal covariance structure:
$q_{\phi }(z|x,y)\sim \mathcal {N}(\mu ,\sigma ^{2}I),\quad q_{\phi }(z_c|x,c)\sim \mathcal {N}(\mu ,\sigma ^{2}I),\quad q_{\phi }(z|z_{c^{\prime }},x)\sim \mathcal {N}(\mu ,\sigma ^{2}I)$
where $\mu $ denotes the mean of the distribution, $\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix.
Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\phi }(z_{c}|x,c)$, $q_{\phi }(z_{c^{\prime }}|x,y)$ and $q_{\phi }(z|x,y)$:
Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI below.
Prior Network The prior network models $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ based on $h^x$. The distributions $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ are still assumed to be multivariate Gaussian, whereas the parameters are different:
$p_{\theta }(z_{c^{\prime }}|x)\sim \mathcal {N}(\mu ^{\prime },\sigma ^{\prime 2}I),\quad p_{\theta }(z|x,z_{c^{\prime }})\sim \mathcal {N}(\mu ^{\prime },\sigma ^{\prime 2}I)$
where $\mu ^{^{\prime }}$ denotes the mean of the distribution, $\sigma ^{^{\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix.
Then the attention-based inferer module is still employed to estimate parameters of distributions:
Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\prime }}$, the neural decoder defines the generation probability of $y$ as follows:
$p(y|x,z,z_{c^{\prime }})=\prod _{j=1}^{n}p(y_j|y_{<j},z,z_{c^{\prime }},x)$
where $p(y_j|y_{<j}, z, z_{c^{\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\cdot )$ is an attention-based feed-forward model, $e_j=\sum _i \alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\cdot )$ and $e_j$ in the same way as BIBREF12 (BIBREF12). However, our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\prime }}$ and the semantic latent variable $z$ in the computation of $s_j=\mathrm {GRU}([E_{yj};s_{j-1};z;z_{c^{\prime }}])$, where $E_{yj}$ is the word embedding of the target word.
Note that through concatenating $z$ and $z_{c^{\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by context-aware latent variable $z_{c^{\prime }}$ and semantic latent variable $z$. This allows model to directly access to the event background knowledge from $z_{c^{\prime }}$. In addition, the randomness of $z$ and $z_{c^{\prime }}$ would increase the diversity of model generation.
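One decoder step can be sketched as below, mirroring the concatenation described above; dimensions and names are illustrative, and the attention-based output layer $g(\cdot )$ is omitted.

```python
# Sketch of the recurrent state update s_j = GRU([E_{y_j}; s_{j-1}; z; z_{c'}]).
import torch
import torch.nn as nn

class LatentAwareDecoderCell(nn.Module):
    def __init__(self, emb_dim=300, hidden_dim=300, z_dim=40):
        super().__init__()
        self.cell = nn.GRUCell(emb_dim + hidden_dim + 2 * z_dim, hidden_dim)

    def forward(self, e_y, s_prev, z, z_c_prime):
        # the two latent variables are injected into every decoding step
        gru_input = torch.cat([e_y, s_prev, z, z_c_prime], dim=-1)
        return self.cell(gru_input, s_prev)
```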
Attention-based Inferer Attention mechanism has shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belongs to $p_{\theta }(\cdot )$ or $q_{\phi }(\cdot )$ by capturing semantic interactions of input sequences.
Specifically, given two input sequences (e.g., representations of contexts and events) $a=\lbrace a_1,\dots ,a_{l_a}\rbrace $ and $b=\lbrace b_1,\dots ,b_{l_b}\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through:
where $W_a \in \mathbb {R}^{d\times d_a}$ and $W_b \in \mathbb {R}^{d\times d_b}$ are parameter weights.
With these attention scores, the context vectors of both sequences are given by:
Then we perform a mean pooling operation on context vectors of both sequences:
To obtain the mean and standard deviation, the pooled context vectors $\bar{c^a}$ and $\bar{c^b}$, which carry the semantic interaction between the two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation:
Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$:
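A compact sketch of the ABI module consistent with this description; the exact scoring function, activations and the log-variance output parameterization are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBasedInferer(nn.Module):
    """Estimates (mu, sigma) of a diagonal Gaussian from two sequences via co-attention."""
    def __init__(self, d_a: int, d_b: int, d: int = 100, latent_dim: int = 40):
        super().__init__()
        self.W_a = nn.Linear(d_a, d, bias=False)   # plays the role of W_a
        self.W_b = nn.Linear(d_b, d, bias=False)   # plays the role of W_b
        self.to_hidden = nn.Linear(d_a + d_b, latent_dim)
        self.to_mu = nn.Linear(latent_dim, latent_dim)
        self.to_logvar = nn.Linear(latent_dim, latent_dim)

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        # a: (B, l_a, d_a), b: (B, l_b, d_b)
        scores = torch.bmm(self.W_a(a), self.W_b(b).transpose(1, 2))     # (B, l_a, l_b)
        attn_a = F.softmax(scores, dim=2)                                # a attends over b
        attn_b = F.softmax(scores, dim=1)                                # b attends over a
        ctx_a = torch.bmm(attn_a, b)                                     # (B, l_a, d_b)
        ctx_b = torch.bmm(attn_b.transpose(1, 2), a)                     # (B, l_b, d_a)
        pooled = torch.cat([ctx_a.mean(dim=1), ctx_b.mean(dim=1)], dim=-1)
        h_z = torch.tanh(self.to_hidden(pooled))                         # nonlinear projection
        mu = self.to_mu(h_z)
        sigma = torch.exp(0.5 * self.to_logvar(h_z))                     # sigma via log-variance
        return mu, sigma
```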
Context-aware Variational Autoencoder ::: Optimizing
With the incorporation of $z_{c^{\prime }}$, the original loglikelihood could be decomposed as:
Then following traditional CVAE, the ELBO of CWVAE is defined as follows:
which is the objective function at the finetune stage.
In the pretrain stage, since we aim to learn background knowledge by minimizing the distance between $z_c$ and $z_{c^{\prime }}$, a context-aware regularization term is introduced in addition to $L^{ELBO}$:
where the context-aware regularization term is the KL distance between $z_c$ and $z_{c^{\prime }}$. By minimizing this term, we aim to pass event context knowledge from $z_c$ to the context-aware latent variable $z_{c^{\prime }}$.
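A sketch of this pretrain-stage objective; the closed-form KL between two diagonal Gaussians is standard, while the direction of the KL and the function names are assumptions:

```python
import torch

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2 I) || N(mu_p, sigma_p^2 I) ), summed over latent dimensions."""
    var_q, var_p = sigma_q.pow(2), sigma_p.pow(2)
    kl = 0.5 * (torch.log(var_p / var_q) + (var_q + (mu_q - mu_p).pow(2)) / var_p - 1.0)
    return kl.sum(dim=-1)

def pretrain_loss(elbo_loss, mu_c, sigma_c, mu_cp, sigma_cp, lam: float = 0.1):
    # L = L_ELBO + lambda * KL( q(z_c | x, c) || p(z_c' | x) )  -- regularization direction assumed
    reg = gaussian_kl(mu_c, sigma_c, mu_cp, sigma_cp).mean()
    return elbo_loss + lam * reg
```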
Context-aware Variational Autoencoder ::: Training Details
To test the performance of CWVAE, we split the Event2Mind and Atomic datasets into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is a biGRU with 300 hidden units. For the ABI module, the sizes of $W_a$ and $W_b$ are set to $100 \times d_a$ and $100 \times d_b$, respectively. The dimensions of $z_c$, $z_{c^{\prime }}$ and $z$ are all set to 40. The neural decoder is a GRU with a 300d hidden state. The regularization coefficient $\lambda $ of the context-aware regularization term is set to 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001.
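The reported hyperparameters collected into one configuration sketch; the field names are illustrative and not taken from the released code:

```python
from dataclasses import dataclass

@dataclass
class CWVAEConfig:
    embedding_dim: int = 300        # initialized from 300d GloVe
    encoder_hidden: int = 300       # biGRU hidden units
    decoder_hidden: int = 300       # GRU decoder hidden state
    abi_projection: int = 100       # rows of W_a and W_b
    latent_dim: int = 40            # dimension of z_c, z_c' and z
    reg_lambda: float = 0.1         # context-aware regularization coefficient
    learning_rate: float = 1e-3     # Adam
    split: tuple = (0.8, 0.1, 0.1)  # train / dev / test
```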
Experiments ::: Auxiliary Dataset
The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs.
For each five-sentence paragraph, we define the first three sentences as the context of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. For example, as shown in Table TABREF25, the first three sentences describe a context in which Jason was unsatisfied with his job and applied for a new one. Hence, after the event “he got the job” happens, a plausible reaction to the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples.
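A minimal sketch of this triple construction; sentence segmentation and the 1,000-word WritingPrompts filter are assumed to happen upstream:

```python
from typing import Iterable, List, Tuple

def build_triples(stories: Iterable[List[str]]) -> List[Tuple[str, str, str]]:
    """Turn five-sentence stories into (context, event, target) triples as described above."""
    triples = []
    for sentences in stories:
        if len(sentences) != 5:
            continue  # only five-sentence paragraphs are used
        context = " ".join(sentences[:3])   # first three sentences: context of the base event
        event = sentences[3]                # fourth sentence: base event
        target = sentences[4]               # fifth sentence: inference target
        triples.append((context, event, target))
    return triples
```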
Experiments ::: Baselines
We compared our proposed model with the following four baseline methods:
RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.
Variational Seq2Seq combines a latent variable with the encoder-decoder structure by converting the last hidden state of the RNN encoder into a Gaussian-distributed latent variable BIBREF8.
VRNMT Proposed by BIBREF19 (BIBREF19), VRNMT combines CVAE with an attention-based encoder-decoder framework by introducing a latent variable to model the semantic distribution of targets.
CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.
Note that for each baseline method, we train a distinct model for each inference dimension.
Experiments ::: Evaluation Metrics ::: Automatic Evaluation
We first compare the perplexity of CWVAE with that of the baseline methods. Perplexity measures the probability that the model regenerates the exact targets, which is particularly suitable for evaluating model performance on the one-to-many problem BIBREF20. Further, we employ the BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-grams to evaluate the diversity of generations BIBREF6. The distinct-n score is normalized to $[0, 1]$ by dividing by the total number of generated tokens.
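A direct implementation of the distinct-n metric as described (tokenization is assumed to have been done upstream):

```python
from typing import Iterable, List

def distinct_n(generations: Iterable[List[str]], n: int) -> float:
    """Number of distinct n-grams divided by the total number of generated tokens."""
    ngrams, total_tokens = set(), 0
    for tokens in generations:
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / total_tokens if total_tokens else 0.0

# distinct_n(all_generated_targets, 1) -> distinct-1; distinct_n(all_generated_targets, 2) -> distinct-2
```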
Experiments ::: Evaluation Metrics ::: Human Evaluation
Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations of model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. For each generated target, experts are asked to vote on whether it is fluent and whether it is coherent, and to give a 1-5 score for the diversity of generations. For both the Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, the top 10 generated targets of each base event are used for evaluation. Finally, we report overall averaged scores of coherence, diversity and fluency on both datasets.
Experiments ::: Overall Results
We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 score on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that:
(1) As shown in Table TABREF32 and Table TABREF34, comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE shows that, variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning.
(2) Comparing CWVAE-unpretrained with the other baseline methods shows that, in general, CWVAE improves the accuracy and diversity on both datasets. These results indicate the effectiveness of CWVAE in capturing the latent semantic distribution of targets and generating more reasonable inferential results.
(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage enhances the performance of CWVAE in both accuracy and diversity. This is mainly because event background knowledge offers guidance for If-Then reasoning. In the pretrain stage, CWVAE captures the event background knowledge through the context-aware latent variable, and such knowledge can be adapted to our task through the finetune stage.
To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistently better coherence, diversity and fluency. Compared with CWVAE-Unpretrained, the pretrain procedure further improves performance on coherence and fluency. The main reasons are twofold: first, CWVAE has an advantage in capturing the semantic distribution of targets; second, the event background knowledge learned in the pretrain stage is helpful for If-Then reasoning.
Experiments ::: Case Study
Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. In contrast, the semantics of the generations from the baseline RNN-based Seq2Seq model are relatively limited. Furthermore, the first three kinds of semantics overlap with the three ground-truth targets, and the fourth is in accordance with daily-life commonsense. Compared to the RNN-based Seq2Seq model, our approach increases the diversity and rationality of generations while maintaining accuracy.
Related Work ::: Event-Centered Commonsense Reasoning
Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently a growing number of studies focus on event-centered commonsense reasoning, which mainly concentrates on two areas, script event prediction and story ending generation/choosing.
Script event prediction concerns the temporal relationships between script events BIBREF25, and requires models to choose the correct subsequent triple-organized event among candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graphs BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical order of events, whereas the If-Then reasoning task focuses on inferring the mental states of event participants.
Related Work ::: Variational AutoEncoder-Decoder Based Natural Language Generation
VAE BIBREF10 has been widely applied in various text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts the VAE to the encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations. For machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentence, and regard the latent variable as a supplement to the attention mechanism. BIBREF29 (BIBREF29) instead use a latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning under its guidance.
Conclusion
In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge, and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge, then in the finetune stage CWVAE adapts such knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations.
Acknowledgments
We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (SQ2018AAA010010), the National Key Research and Development Program of China (2018YFB1005103), the National Natural Science Foundation of China (NSFC) via Grant 61702137. | by number of distinct n-grams |
8e2b125426d1220691cceaeaf1875f76a6049cbd | 8e2b125426d1220691cceaeaf1875f76a6049cbd_0 | Q: By how much do they improve the accuracy of inferences over state-of-the-art methods?
Text: Introduction
Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because understanding events is an important component of NLP. Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on. However, this still remains a challenging task for NLP systems. This is partly because most of them are trained on task-specific datasets or objectives, which results in models that are adept at finding task-specific underlying correlation patterns but have limited capability for simple and explainable commonsense reasoning BIBREF4.
To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, which mainly focus on nine If-Then reasoning types describing causes, effects, intents and participant characteristics of events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.
However, there still remain two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feelings of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses rather than meaningful and specific answers BIBREF6, BIBREF7.
Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feelings of PersonX upon the event “PersonX finds a new job” could be multiple. However, given the context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.
To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generating diversified inferences BIBREF8, BIBREF9.
In addition to the traditional VAE structure, we introduce an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (which consists of three narrative story corpora and contains rich event background knowledge) to learn event background information through the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of the If-Then inferential target (e.g., intents, reactions, etc.).
Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE.
Background
Before specifically describing the two datasets used in this paper, Event2Mind and Atomic, as well as the If-Then reasoning task, for clarity we define the following terminology:
Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.
Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.
Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.
Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.
Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scaling up the size of the dataset and expanding the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is not identical to that of Atomic.
Problem Definition The If-Then reasoning task can be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequences of words: $x=\lbrace x_1,\dots , x_{m}\rbrace $ and $y=\lbrace y_1,\dots , y_{n}\rbrace $, where $m$ and $n$ denote the lengths of $x$ and $y$, respectively.
Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited to the one-to-many generation problem BIBREF10, and the conditional variational autoencoder (CVAE) BIBREF11 is an extension of the VAE to the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: the event $x$, the target $y$ and a latent variable $z$, which is used to model the latent distribution of semantics over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem can be expressed as $p(y|x)=\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\theta $) $p_{\theta }(y|x,z)$ and $p_{\theta }(z|x)$. Then, as illustrated in Figure FIGREF5 (b), $y$ can be generated from $x$ and $z$.
CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\phi $) $q_{\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:
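The ELBO expression itself is missing from the extracted text; the standard CVAE form being referred to is:

```latex
\mathcal{L}^{\mathrm{ELBO}}(\theta, \phi; x, y)
  = \mathbb{E}_{q_{\phi}(z \mid x, y)}\big[\log p_{\theta}(y \mid x, z)\big]
  - \mathrm{KL}\big(q_{\phi}(z \mid x, y)\,\big\|\,p_{\theta}(z \mid x)\big)
  \;\le\; \log p(y \mid x)
```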
Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\theta }(z|x)$ as a prior network, $q_{\phi }(z|x,y)$ as a recognition network, and $p_{\theta }(y|x,z)$ as a neural decoder.
Context-aware Variational Autoencoder
Traditional CVAE can model the event-target relation; in other words, given an observed event, CVAE can generate its corresponding targets. In this paper, however, we model If-Then reasoning as a [(background), event]-target process. This means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate reasonable targets.
To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.
Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\prime }}$.
Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. The pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ are generated based on $x$ and samples of $z_{c^{\prime }}$, where $z_{c^{\prime }}$ contains rich event background knowledge helpful for If-Then reasoning.
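A high-level sketch of this two-stage procedure; the model methods and loop structure are placeholders for illustration, not the released training script:

```python
def train_cwvae(model, auxiliary_triples, task_pairs, pretrain_epochs=1, finetune_epochs=1, lam=0.1):
    # Stage 1 (pretrain): learn event background knowledge from (context, event, target) triples.
    for _ in range(pretrain_epochs):
        for context, event, target in auxiliary_triples:
            # L = L_ELBO + lambda * (context-aware regularization between z_c and z_c')
            loss = model.elbo(event, target) + lam * model.context_regularizer(context, event)
            model.step(loss)

    # Stage 2 (finetune): adapt the knowledge in z_c' to one inference dimension,
    # training on (event, target) pairs without contexts and optimizing L_ELBO only.
    for _ in range(finetune_epochs):
        for event, target in task_pairs:
            model.step(model.elbo(event, target))
```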
Context-aware Variational Autoencoder ::: Architecture of CWVAE
As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets.
| On Event2Mind, the accuracy of the proposed method is improved by absolute BLEU scores of 2.9, 10.87 and 1.79 for xIntent, xReact and oReact, respectively.
On the Atomic dataset, the accuracy of the proposed method is improved by absolute BLEU scores of 3.95, 4.11 and 4.49 for xIntent, xReact and oReact, respectively. |
42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1 | 42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1_0 | Q: Which models do they use as baselines on the Atomic dataset?
Text: Introduction
Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because of understanding events is an important component of NLP. Given a daily-life event, human can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly due to most of them are trained for task-specific datasets or objectives, which results in models that are adapt at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.
To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristic about events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.
However, there still remains two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feeling of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7.
Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feeling of PersonX upon the event “PersonX finds a new job” could be multiple. However, after given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.
To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generate diversified inferences BIBREF8, BIBREF9.
In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.).
Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE.
Background
Before specifically describing two dataset —- Event2Mind and Atomic used in this paper as well as the If-Then reasoning task, for clarity, we define the following terminologies:
Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.
Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.
Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.
Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.
Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic.
Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequence of words: $x=\lbrace x_1,\dots , x_{m}\rbrace $, and $y=\lbrace y_1,\dots , y_{n}\rbrace $, where $m$ and $n$ denotes the length of $x$ and $y$, respectively.
Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\theta $) $p_{\theta }(y|x,z)$ and $p_{\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$.
CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\phi $) $q_{\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:
Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\theta }(z|x)$ as a prior network, $q_{\phi }(z|x,y)$ as a recognition network, and $p_{\theta }(y|x,z)$ as a neural decoder.
Context-aware Variational Autoencoder
Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. While in this paper we model the If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets.
To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.
Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\prime }}$.
Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ is generated based on $x$ and samples of $z_{c^{\prime }}$, where $z_{c^{\prime }}$ contains rich event background knowledge helpful for If-Then reasoning.
Context-aware Variational Autoencoder ::: Architecture of CWVAE
As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets.
Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\lbrace h_1^c,\dots ,h_{l_c}^c\rbrace $, $h^x=\lbrace h_1^x,\dots ,h_{l_x}^x\rbrace $ and $h^y=\lbrace h_1^y,\dots ,h_{l_y}^y\rbrace $, where $l_c$, $l_x$ and $l_y$ is the length of $c$, $x$ and $y$, respectively.
Recognition Network The recognition network models $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$, $q_{\phi }(z|z_{c^{\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$.
Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distribution with a diagonal covariance structure:
where $\mu $ denotes the mean of the distribution, $\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix.
Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\phi }(z_{c}|x,c)$, $q_{\phi }(z_{c^{\prime }}|x,y)$ and $q_{\phi }(z|x,y)$:
Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI in below.
Prior Network Prior Network models $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ based on $h^x$. The distribution of $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ are still assumed to be multivariate Gaussian, whereas the parameters are different:
where $\mu ^{^{\prime }}$ denotes the mean of the distribution, $\sigma ^{^{\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix.
Then the attention-based inferer module is still employed to estimate parameters of distributions:
Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\prime }}$, the neural decoder defines the generation probability of $y$ as following:
where $p(y_j|y<j, z, z_{c^{\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\cdot )$ is an attention-based feed forward model, $e_j=\sum _i \alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\cdot )$ and $e_j$ the same way as BIBREF12 (BIBREF12). Whereas our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\prime }}$ and semantic latent variable $z$ in the computation of $s_j=\mathrm {GRU}([E_{yj};s_{j-1},z,z_{j-1}])$, where $E_{yj}$ is the word embeddings of target words.
Note that through concatenating $z$ and $z_{c^{\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by context-aware latent variable $z_{c^{\prime }}$ and semantic latent variable $z$. This allows model to directly access to the event background knowledge from $z_{c^{\prime }}$. In addition, the randomness of $z$ and $z_{c^{\prime }}$ would increase the diversity of model generation.
Attention-based Inferer Attention mechanism has shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belongs to $p_{\theta }(\cdot )$ or $q_{\phi }(\cdot )$ by capturing semantic interactions of input sequences.
Specifically, given two input sequences (e.g., representations of contexts and events) $a=\lbrace a_1,\dots ,a_{l_a}\rbrace $ and $b=\lbrace b_1,\dots ,b_{l_b}\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through:
where $W_a \in \mathbb {R}^{d\times d_a}$ and $W_b \in \mathbb {R}^{d\times d_b}$ are parameter weights.
With these attention scores, the context vectors of both sequences are given by:
Then we perform a mean pooling operation on context vectors of both sequences:
To obtain the mean and standard deviation, the pooled context vectors $\bar{c^a}$ and $\bar{c^b}$ which carry semantic interaction between two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation:
Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$:
Context-aware Variational Autoencoder ::: Optimizing
With the incorporation of $z_{c^{\prime }}$, the original loglikelihood could be decomposed as:
Then following traditional CVAE, the ELBO of CWVAE is defined as follows:
which is the objective function at the finetune stage.
While in the pretrain stage, as we aim to learn background knowledge through minimizing the distance between $z_c$ and $z_{c^{\prime }}$, in addition to $L^{ELBO}$, a context-aware regulation term is introduced:
where the context aware regularization term is the KL distance between $z$ and $z_{c^{\prime }}$. Through minimizing the context aware regularization term, we aim to pass event context knowledge from $z_c$ to the context aware latent variable $z_{c^{\prime }}$.
Context-aware Variational Autoencoder ::: Training Details
To test the performance of CWVAE, we split the Event2Mind and Atomic dataset into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be biGRU with 300 hidden units. For the ABI module, size of $W_a$ and $W_b$ is set to be $100 \times d_a$ and $100 \times d_b$ respectively. The dimension of $z_c$, $z_{c^{\prime }}$ and $z$ is all set as 40. The neural decoder is set to be GRU with 300d hidden state. Regulation coefficient $\lambda $ of context-aware regulation term is set to be 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001.
Experiments ::: Auxiliary Dataset
The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs.
For each five-sentence-paragraph, we define the first three sentences as contexts of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. For example, as shown in Table TABREF25, the first three sentences describe a context that Jason was unsatisfied about his job and applied for a new job. Hence, after happening the event “he got the job”, a plausible react about the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples.
Experiments ::: Baselines
We compared our proposed model with the following four baseline methods:
RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.
Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.
VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.
CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.
Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.
Experiments ::: Evaluation Metrics ::: Automatic Evaluation
We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens.
Experiments ::: Evaluation Metrics ::: Human Evaluation
Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations on the model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote for if a generation is fluent or coherent for each generated target, and give a 1-5 score for the diversity of generations. For both Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, top 10 generated targets of each base event are used for evaluation. Finally we report three overall averaged scores of coherence, diversity and fluency on both datasets, respectively.
Experiments ::: Overall Results
We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 score on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that:
(1) As shown in Table TABREF32 and Table TABREF34, comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE shows that, variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning.
(2) Comparing CWVAE-unpretrained with other baseline methods shows that, in general CWVAE improves the accuracy and diversity on both dataset. These results indicate the efficiency of CWVAE in capturing the latent semantic distribution of targets, and generate more reasonable inferential results.
(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage could enhance the performance of CWVAE in both the accuracy and diversity. This is mainly because event knowledge could offer the guidance for If-Then reasoning. In the pretrain stage, CWVAE could capture the event background knowledge through context-aware latent variable, and such knowledge could be be adapted to our task through the fintune stage.
To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistent better coherence, diversity and fluency performances. While comparing with CWVAE-Unpretrained, the pretrain procedure could improves the performance on coherence and fluency. The main reasons are twofold: first, the CWVAE has advantage in capturing the semantic distribution of targets; second, event background learned from the pretrain stage is helpful for the If-Then reasoning.
Experiments ::: Case Study
Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. While the semantics of generations using baseline RNN-based Seq2Seq model is relatively limited. Furthermore, the first three kinds of semantic overlap the three ground truth targets, and the fourth kind of semantic is in accordance with daily-life commonsense. Compared to RNN-based Seq2Seq model, our approach can increase the diversity and rationality of generations, meanwhile keep the accuracy.
Related Work ::: Event-Centered Commonsense Reasoning
Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently a growing number of studies focus on event-centered commonsense reasoning, which mainly concentrates on two areas, script event prediction and story ending generation/choosing.
Script event prediction concerns with the temporal relationships between script events BIBREF25, which requires models to choose a correct subsequent triple-organized event among the candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graph BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context, and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical orders of events, whereas the If-Then reasoning task focuses on inferring the mental state of event participants.
Related Work ::: Variational AutoEncoder-Decoder Based Natural Language Generation
VAE BIBREF10 has been widely applied to various text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts VAE to the encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations. For machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentences, and regard the latent variable as a supplement to the attention mechanism, while BIBREF29 (BIBREF29) use a latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning under its guidance.
Conclusion
In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge; in the finetune stage, it adapts this knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations.
Acknowledgments
We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (SQ2018AAA010010), the National Key Research and Development Program of China (2018YFB1005103), the National Natural Science Foundation of China (NSFC) via Grant 61702137. | RNN-based Seq2Seq, Variational Seq2Seq, VRNMT , CWVAE-Unpretrained |
fb76e994e2e3fa129f1e94f1b043b274af8fb84c | fb76e994e2e3fa129f1e94f1b043b274af8fb84c_0 | Q: How does the context-aware variational autoencoder learn event background information?
| CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. Then, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of the If-Then inferential target. |
99ef97336c0112d9f60df108f58c8b04b519a854 | 99ef97336c0112d9f60df108f58c8b04b519a854_0 | Q: What is the size of the Atomic dataset?
| Unanswerable |
95d8368b1055d97250df38d1e8c4a2b283d2b57e | 95d8368b1055d97250df38d1e8c4a2b283d2b57e_0 | Q: what standard speech transcription pipeline was used?
Text: Introduction
Automatic speech recognition (ASR) systems have seen remarkable advances over the last half-decade from the use of deep, convolutional and recurrent neural network architectures, enabled by a combination of modeling advances, available training data, and increased computational resources. Given these advances, our research group recently embarked on an effort to reach human-level transcription accuracy using state-of-the-art ASR techniques on one of the genres of speech that has historically served as a difficult benchmark task: conversational telephone speech (CTS). About a decade ago, CTS recognition had served as an evaluation task for government-sponsored work in speech recognition, predating the take-over of deep learning approaches and still largely in the GMM-HMM modeling framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. It had proven to be a hard problem, due to the variable nature of conversational pronunciations, speaking styles, and regional accents. Seide et al. BIBREF6 demonstrated that deep networks as acoustic models could achieve significant improvements over GMM-HMM models on CTS data, and more recently researchers at IBM had achieved results on this task that represented a further significant advance BIBREF7, BIBREF8 over those from a decade ago.
The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 .
In this paper, we address the question of whether there are major qualitative differences between the results of human transcriptions of conversational speech and those obtained by ASR systems, based on a detailed analysis of the data and system output from our human parity experiment BIBREF19. The question becomes important if ASR is to replace humans as the first step in fully automatic speech understanding systems: if machine transcription errors are qualitatively different from human ones, then we would have to worry about the possible effects on downstream processing, and about mitigation techniques so as to still achieve an overall “natural” user experience (e.g., in real-time conversational speech translation, such as in the Skype application).
We start by discussing why the human error rate on this task must itself be considered a moving target. Next we ask whether speech that is difficult for ASR also tends to be hard for humans to transcribe (and vice-versa), and whether the speaker overlap with the training data that is found in a portion of the test data has a noticeable effect on the result, as was suggested in BIBREF20. We then look at the most frequent word error types exhibited by the two transcription systems (human and machine), and finally report on a very preliminary but still informative experiment to see if humans could tell apart the transcription source (again, human versus machine), based on the errors they make.
Measuring Human Error
The assessment of human transcription error on conversational speech has been somewhat murky. A widely cited figure is 4% word error rate (WER), based on BIBREF21 . However, the reference therein is only a “personal communication” without further data. The Linguistics Data Consortium quantified inter-transcriber disagreement for the NIST 2003 CTS evaluation data at between 4.1% and 4.5% with very careful multiple transcriptions BIBREF22 . For “quick transcription”, the disagreement increased to 9.6%. The CTS data in the NIST study is from the Switchboard (SWB) and Fisher corpora, and is therefore comparable to the SWB portion of our data, i.e., coming from telephone conversations between strangers discussing a general-interest topic. Still, the exact dataset is different, which may account for some of the discrepancy with error rates measured on the NIST 2000 set used by us (5.9%) and IBM (5.1%), although the numbers are remarkably close.
As briefly described in the introduction, we measured human performance by leveraging an existing pipeline in which Microsoft data is transcribed on a weekly basis. This pipeline uses a large commercial vendor to perform two-pass transcription. In the first pass, a transcriber works from scratch to transcribe the data. In the second pass, a second listener monitors the data to do error correction. Dozens of hours of test data are processed in each batch, with no special instructions to the transcribers. The waveform segments, roughly corresponding to utterances, making up the test set are processed separately. This makes the task easier since the speakers are more clearly separated, but also more difficult since the two sides of the conversation are not interleaved and context may be missing. We performed text normalization on the human transcripts to remove systematic discrepancies with the NIST scoring references. (Since this was done with some amount of trial and error it effectively was “cheating” for the benefit of the human transcribers.) We then applied the NIST scoring tools to obtain word error rates of 5.9% on the SWB portion, and 11.3% on the CallHome (CH) portion of the NIST 2000 test set. The latter corpus, unlike Switchboard, consists of conversations between friends and family, without seed topic, which would account for the much higher overall error rate. Clearly our method was not designed to achieve the highest possible human transcription accuracy; instead, as pointed out in BIBREF19, our goal was to establish a benchmark corresponding to industry-standard (i.e. high-volume) professional transcript production.
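For context, the NIST scoring tools compute word error rate from a minimum-edit-distance alignment of hypothesis words against reference words, counting substitutions, deletions and insertions. The following is a bare-bones sketch of that computation; it is not a replacement for the actual NIST tools and ignores their normalization rules, scoring exclusions and alignment options.

def word_error_rate(reference, hypothesis):
    # reference, hypothesis: lists of words; returns (subs + dels + ins) / len(reference)
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i  # all reference words deleted
    for j in range(len(hypothesis) + 1):
        d[0][j] = j  # all hypothesis words inserted
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            substitution = d[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(reference)][len(hypothesis)] / len(reference)

# e.g., scoring one transcribed segment against its reference
print(word_error_rate("but i really do like it".split(), "but i do like it".split()))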
The authors in BIBREF20 undertook to measure human error on the same dataset, but using a more involved process. The major differences were: (1) The transcription vendor was cognizant of the experiment and actively involved. (2) Transcribers were chosen based on past performance and familiarized with the conventions used by LDC in generating the reference transcripts. (3) Three independent, parallel transcribers were used, plus a fourth one for 2nd-pass quality control (QC) of the 1st-pass output. All in all, the transcribers performed roughly 12 to 18 listening passes. (4) The final output was obtained by choosing the transcriber (with QC) who had obtained the lowest WER on the test data. As noted earlier, the resulting WERs were 5.1% and 6.8%, respectively. The considerably lower estimate for CH could be a result of the transcribers having access to the entire conversation (as per personal communication with the authors). This would be especially helpful in transcribing unfamiliar vocabulary and speaking styles (allowing the transcriber to “adapt” to the data more effectively).
Clearly the IBM experiment made a much more thorough effort to probe the boundaries of human accuracy, and may in fact have come close to the inter-transcriber agreement previously measured by LDC on a different data set. However, it is important to realize that further improvements on the human side are no doubt achievable. For example, the number of transcribers could be scaled up further, or they could be allowed to confer with each other, to resolve disagreements. This raises the question of where to draw the line on human effort.
Finally, it is important to realize that conversational speech has a high degree of inherent ambiguity. For example, conversational pronunciations are highly variable and often reduced BIBREF23 . Another source of ambiguity is the lack of context and knowledge shared by the speakers (especially in the case of CH). In the presence of inherent ambiguity, inter-transcriber agreement can be improved by agreed-upon disambiguation rules, although this would not necessarily reflect true agreement based on speech understanding.
Machine Transcription System
The details of our conversational speech recognition system are described elsewhere BIBREF19 , so we only give a brief summary here. The system employs independent decodings by diverse acoustic models, including convolutional neural net (CNN) and bidirectional long short-term memory (BLSTM) models that differ by model architecture, number of senones, amount of training data, and other metaparameters. Decoding uses a pruned 4-gram N-gram language model (LM) to generate lattices, which are then expanded into 500-best lists using a larger N-gram LM. The N-best lists are rescored with multiple LSTM-LMs operating in forward and backward directions. Model scores are combined log-linearly at the utterance level and converted to posterior probabilities represented as word confusion networks. The various subsystems making up the final system are selected in a greedy search, and their weights are optimized via an expectation-maximization algorithm, on development data. The acoustic training data comprises all the publicly available CTS data (about 2000 hours), while the LMs are additionally trained on Broadcast News and Web data from U. Washington. The individual subsystems (based on different acoustic models) achieve word error rates between 6.4% and 7.7% on the Switchboard evaluation set, and between 12.2% and 17.0% on the CallHome portion. Combined, the system achieves 5.8% and 11.0% WER, respectively.
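As a rough illustration of the utterance-level score combination described above, here is a minimal sketch of log-linearly combining per-hypothesis scores from several subsystems and converting them to posterior probabilities. This is not the authors' code; the subsystem names, scores, and weights are invented for the example, and the real system operates over word confusion networks rather than a flat hypothesis list.

import math

def combine(hypotheses, weights):
    """hypotheses: list of dicts mapping subsystem name -> log score for that
    hypothesis. Returns posterior probabilities over the hypothesis list."""
    combined = [sum(weights[s] * h[s] for s in weights) for h in hypotheses]
    m = max(combined)                        # numerically stabilized softmax
    exps = [math.exp(c - m) for c in combined]
    z = sum(exps)
    return [e / z for e in exps]

# two competing hypotheses scored by a CNN and a BLSTM acoustic model
hyps = [{"cnn": -12.3, "blstm": -11.8}, {"cnn": -13.1, "blstm": -12.9}]
print(combine(hyps, {"cnn": 0.6, "blstm": 0.4}))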
Error Distribution and Correlation
We note in passing that machine and human transcription WERs do not differ significantly according to the Wilcoxon and Matched Pairs Sentence Segment Word Error tests as applied by NIST, nor do they differ according to a Sign test comparing error counts at the utterance level.
A first high-level question regarding the relation between word errors by machine and human transcribers is whether difficulty in one predicts difficulty in the other. Figure FIGREF1 shows scatter plots of speaker-level error rates (machine vs. human), separated by corpus. Each corpus subset has 40 conversation sides.
Clearly the errors at that level are correlated, with INLINEFORM0 for SWB and INLINEFORM1 for CH. This suggests that properties of the speech, either as a function of the content, the speaker, or the channel (each speaker occurs in exactly one test conversation), cause errors for both machine and human transcription.
We observe that the CH data has two speakers with outlier machine error rates (37.5% and 64.7% WER, solid red dots in Figure FIGREF1 ). These correspond to secondary speakers in their respective conversation sides, each with only a fraction of the speech of the dominant speaker. Note that the ASR system processes each conversation assuming only a single speaker per side. If we remove these outliers, the machine-human error correlation on CH increases to INLINEFORM0 . With secondary speakers excluded, we can also observe that the machine error rates cluster tighter than the human ones in both corpora (SWB: machine INLINEFORM1 vs. human INLINEFORM2 ; CH: machine INLINEFORM3 vs. human INLINEFORM4 ).
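A minimal sketch of this speaker-level analysis is given below; the per-speaker WER values are invented for illustration (the last pair mimics an outlier secondary speaker of the kind discussed above).

from statistics import mean, stdev

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs) ** 0.5 *
                  sum((y - my) ** 2 for y in ys) ** 0.5)

# hypothetical per-speaker WERs (%), machine vs. human
machine = [5.2, 6.1, 7.5, 4.9, 8.3, 37.5]
human   = [5.8, 6.4, 8.0, 5.1, 9.0, 20.3]
print(pearson(machine, human))                  # correlation with the outlier
print(pearson(machine[:-1], human[:-1]))        # correlation, outlier removed
print(stdev(machine[:-1]), stdev(human[:-1]))   # spread of the two systems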
In BIBREF20 it was suggested that one of the reasons for the much higher error rate on CH compared to SWB was that 36 of the 40 SWB test speakers occur in the portion of the SWB corpus that is used in training (due to what we surmise to be an oversight in the selection of the NIST 2000 test set). To assess this hypothesis we singled out the four speakers in the SWB portion that are not found in the training set; these are shown as solid black circles in Figure FIGREF1 . At first, it seems that the speaker-averaged WER for the “seen” speakers (machine WER 5.9%) is indeed much lower than for the speakers not found in training (7.5%). However, we can safely attribute this to bad luck and small sample size. The average machine WER of 7.5% for “unseen” speakers is well within one standard deviation of the “seen” speakers' WER distribution ( INLINEFORM0 ), and more tellingly, almost exactly the same relative difference in WERs between “seen” and “unseen” speakers is observed for human transcriptions (6.0% versus 7.7%). Clearly the human transcribers did not have the benefit of training on the “seen” speakers, so the difference must be due to the intrinsic difficulty of the speakers, which affects both transcription systems.
Error types
Tables TABREF3 – TABREF5 show the top ten types of substitutions, deletions and insertions for both ASR and human transcripts. Inspection reveals that the same short function words, discourse markers and filled pauses appear in the top ten errors for both systems. There is one notable exception, however. The top substitution error for the ASR system involves misrecognition of filled pauses (“%hesitation”, a word class label covering “uh” and “um” in various spellings) as backchannel acknowledgments (“%bcack”, standing for “uhhuh”, “mhm”, etc.). The same substitution error is much less frequent in human transcripts.
A possible explanation for this asymmetry lies in the discourse functions of filled pauses and backchannels. Filled pauses serve to either claim or retain the floor, signaling that the speaker wants to either start or continue speaking. Backchannels, on the other hand, acknowledge that the speaker is listening, and that the other speaker should carry on. Since the two classes of words thus have exactly opposite functions in turn management, it stands to reason that humans are keenly aware of their differences and use all available phonetic, prosodic, and contextual cues to distinguish them. Our ASR system, by contrast, uses only its standard acoustic-phonetic and language models. Modeling dialog context in particular would be expected to address this shortcoming.
A Turing-like Experiment
Having established that human and machine transcriptions are quite similar in several aspects, including the word token types involved, we were wondering if higher-level error patterns could distinguish the two systems. For example, one might expect that human misrecognitions are guided by a strong “human” language and understanding model, whereas machine errors might be more likely to generate syntactic and semantic nonsense. To get at this question we designed a specialized version of the classic Turing test, in the sense that a human judge is asked to interact with a system with the goal of estimating whether it is underpinned by human or artificial “intelligence.” In our case, the task involved inspecting one randomly chosen utterance from the test set at a time, with a side-by-side display of the reference transcript, the human transcript, and the ASR output (after the text normalizations that are part of the scoring protocol). Only utterances having at least one transcription error and a discrepancy between the two versions are presented. Discrepancies between the transcript versions are highlighted, and the error type (substitution, insertion, deletion) is visually coded as well, as shown in Figure FIGREF7 .
We ran this informal experiment over four days on the exhibitor floor of the 2017 IEEE ICASSP conference in New Orleans. The players were not formally recruited or characterized, but consisted of conference attendees who for the most part had some background or experience in speech processing. Subjects were introduced to the test by explaining the research background, and were allowed to play as many trials as they wanted. Out of a total of 353 trials, subjects identified the human transcript correctly 188 times, for an overall success rate of 53%. The successes included occasional gimmes like human misspellings or the asymmetry in the filled pause/backchannel substitution (which we pointed out to the subjects). According to a binomial test, this success rate does not differ significantly from the 50% chance rate ( INLINEFORM0 , one-tailed). While this result is obviously quite preliminary, it was a good demonstration that it is not easy distinguishing machine from human errors, even for technically sophisticated observers.
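The significance check reported here corresponds to a one-tailed binomial test, which can be verified exactly with a few lines of Python (a sketch; the original analysis may have been computed differently):

from math import comb

def binom_p_one_tailed(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 188 correct source identifications out of 353 trials, 50% chance rate
print(binom_p_one_tailed(188, 353))   # roughly 0.12, well above 0.05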
Conclusions
We have discussed methodological issues and reported first findings when comparing automatic conversational speech transcriptions to human performance, using data generated by our recent efforts to reach human parity in CTS recognition. While an exact characterization of the human benchmark remains a moving target that is subject to debate, our results so far have shown that machine transcription errors track those made by humans in several important aspects. At the speaker (as well as corpus) level the two error rates are strongly correlated, suggesting that common underlying factors in the speech data determine transcription difficulty for both humans and ASR systems. (A detailed characterization of those factors has precedent in ASR research and should be revisited while also considering human performance.) A partial overlap of Switchboard training and test speakers seems to have no major effect on error rates. We also find that the most frequent error patterns involve the same short function words and discourse particles for both humans and machines. The one notable exception is that ASR tends to confuse filled pauses and backchannels, a functional distinction that humans need to be very good at pragmatically. An informal Turing-like test also demonstrated that error patterns in the two types of transcriptions are not obviously distinguishable. Overall, we conclude that recent advances in ASR technology have not only achieved remarkable levels of accuracy, but also generate results that are qualitatively surprisingly similar to professional human transcriber output.
Acknowledgments
We thank our coauthors and collaborators on the Human Parity project: X. Huang, F. Seide, M. Seltzer, W. Xiong, D. Yu, and G. Zweig. Thanks to K. Riedhammer for sharing metadata on train/test speaker overlap. | pipeline that is used at Microsoft for production data |
a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4 | a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4_0 | Q: How much improvement does their method get over the fine tuning baseline?
Text: Introduction
One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .
Domain adaptation has been shown to be effective for low resource NMT. The conventional domain adaptation method is fine tuning, in which an out-of-domain model is further trained on in-domain data BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . However, fine tuning tends to overfit quickly due to the small size of the in-domain data. On the other hand, multi domain NMT BIBREF8 involves training a single NMT model for multiple domains. This method adds tags “<2domain>" by modifying the parallel corpora to indicate domains without any modifications to the NMT system architecture. However, this method has not been studied for domain adaptation in particular.
Motivated by these two lines of studies, we propose a new domain adaptation method called “mixed fine tuning," where we first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus that is a mix of the in-domain and out-of-domain corpora. Fine tuning on the mixed corpus instead of the in-domain corpus can address the overfitting problem. All corpora are augmented with artificial tags to indicate specific domains. We tried two different corpora settings:
We observed that “mixed fine tuning" works significantly better than methods that use fine tuning and domain tag based approaches separately. Our contributions are twofold:
Related Work
Besides fine tuning and multi domain NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural network (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 has been studied.
Methods for Comparison
All the methods that we compare are simple and do not need any modifications to the NMT system.
Fine Tuning
Fine tuning is the conventional way for domain adaptation, and thus serves as a baseline in this study. In this method, we first train an NMT system on a resource rich out-of-domain corpus till convergence, and then fine tune its parameters on a resource poor in-domain corpus (Figure 1 ).
Multi Domain
The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain.
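A minimal sketch of this corpus preparation step is shown below. The tag strings and the choice to oversample with replacement are illustrative assumptions for the example, not details taken from the paper.

import random

def build_multi_domain_corpus(corpora, seed=0):
    """corpora: dict mapping a domain tag such as '<2patent>' to a list of
    (source, target) sentence pairs. Prepends the tag to each source sentence
    and oversamples smaller corpora to the size of the largest one."""
    random.seed(seed)
    largest = max(len(pairs) for pairs in corpora.values())
    mixed = []
    for tag, pairs in corpora.items():
        sampled = pairs if len(pairs) == largest else \
                  [random.choice(pairs) for _ in range(largest)]
        mixed.extend((tag + " " + src, tgt) for src, tgt in sampled)
    random.shuffle(mixed)
    return mixed

# toy usage with two domains of unequal size
corpus = build_multi_domain_corpus({
    "<2patent>": [("zh patent sent", "en patent sent")] * 5,
    "<2tedtalk>": [("zh ted sent", "en ted sent")] * 2,
})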
We can further fine tune the multi domain model on the in-domain data, which is named as “multi domain + fine tuning.”
Mixed Fine Tuning
The proposed mixed fine tuning method is a combination of the above methods (shown in Figure 2 ). The training procedure is as follows:
Train an NMT model on out-of-domain data till convergence.
Resume training the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) till convergence.
By default, we utilize domain tags, but we also consider settings where we do not use them (i.e., “w/o tags”). We can further fine tune the model from step 2 on the in-domain data, which is named as “mixed fine tuning + fine tuning.”
Note that in the “fine tuning” method, the vocabulary obtained from the out-of-domain data is used for the in-domain data; while for the “multi domain” and “mixed fine tuning” methods, we use a vocabulary obtained from the mixed in-domain and out-of-domain data for all the training stages.
Experimental Settings
We conducted NMT domain adaptation experiments in two different settings as follows:
High Quality In-domain Corpus Setting
Chinese-to-English translation was the focus of the high quality in-domain corpus setting. We utilized the resource rich patent out-of-domain data to augment the resource poor spoken language in-domain data. The patent domain MT was conducted on the Chinese-English subtask (NTCIR-CE) of the patent MT task at the NTCIR-10 workshop BIBREF9 . The NTCIR-CE task uses 1000000, 2000, and 2000 sentences for training, development, and testing, respectively. The spoken domain MT was conducted on the Chinese-English subtask (IWSLT-CE) of the TED talk MT task at the IWSLT 2015 workshop BIBREF10 . The IWSLT-CE task contains 209,491 sentences for training. We used the dev 2010 set for development, containing 887 sentences. We evaluated all methods on the 2010, 2011, 2012, and 2013 test sets, containing 1570, 1245, 1397, and 1261 sentences, respectively.
Low Quality In-domain Corpus Setting
Chinese-to-Japanese translation was the focus of the low quality in-domain corpus setting. We utilized the resource rich scientific out-of-domain data to augment the resource poor Wikipedia (essentially open) in-domain data. The scientific domain MT was conducted on the Chinese-Japanese paper excerpt corpus (ASPEC-CJ) BIBREF11 , which is one subtask of the workshop on Asian translation (WAT) BIBREF15 . The ASPEC-CJ task uses 672315, 2090, and 2107 sentences for training, development, and testing, respectively. The Wikipedia domain task was conducted on a Chinese-Japanese corpus automatically extracted from Wikipedia (WIKI-CJ) BIBREF12 using the ASPEC-CJ corpus as a seed. The WIKI-CJ task contains 136013, 198, and 198 sentences for training, development, and testing, respectively.
MT Systems
For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100.
For performance comparison, we also conducted experiments on phrase based SMT (PBSMT). We used the Moses PBSMT system BIBREF17 for all of our MT experiments. For the respective tasks, we trained 5-gram language models on the target side of the training data using the KenLM toolkit with interpolated Kneser-Ney discounting, respectively. In all of our experiments, we used the GIZA++ toolkit for word alignment; tuning was performed by minimum error rate training BIBREF18 , and it was re-run for every experiment.
For both MT systems, we preprocessed the data as follows. For Chinese, we used KyotoMorph for segmentation, which was trained on the CTB version 5 (CTB5) and SCTB BIBREF19 . For English, we tokenized and lowercased the sentences using the tokenizer.perl script in Moses. Japanese was segmented using JUMAN BIBREF20 .
For NMT, we further split the words into sub-words using byte pair encoding (BPE) BIBREF21 , which has been shown to be effective for the rare word problem in NMT. Another motivation of using sub-words is making the different domains share more vocabulary, which is important especially for the resource poor domain. For the Chinese-to-English tasks, we trained two BPE models on the Chinese and English vocabularies, respectively. For the Chinese-to-Japanese tasks, we trained a joint BPE model on both of the Chinese and Japanese vocabularies, because Chinese and Japanese could share some vocabularies of Chinese characters. The number of merge operations was set to 30,000 for all the tasks.
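For reference, the core merge-learning loop of BPE BIBREF21 can be written in a few lines; the toy version below is adapted from the algorithm's published description with a tiny made-up vocabulary, and only illustrates the idea rather than the exact toolkit or settings used in our experiments.

import re, collections

def get_stats(vocab):
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, vocab):
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

# words represented as space-separated symbols with an end-of-word marker
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
for _ in range(10):               # the experiments above use 30,000 merges
    pairs = get_stats(vocab)
    if not pairs:
        break
    vocab = merge_vocab(max(pairs, key=pairs.get), vocab)
print(vocab)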
Results
Tables 1 and 2 show the translation results on the Chinese-to-English and Chinese-to-Japanese tasks, respectively. The entries with SMT and NMT are the PBSMT and NMT systems, respectively; others are the different methods described in Section "Methods for Comparison" . In both tables, the numbers in bold indicate the best system and all systems that were not significantly different from the best system. The significance tests were performed using the bootstrap resampling method BIBREF22 at $p < 0.05$ .
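The significance test mentioned above can be sketched generically as follows. This is illustrative only: `metric` stands in for any corpus-level scorer such as a BLEU implementation, which is not included here, and the confidence level and sample count are placeholders.

import random

def paired_bootstrap(metric, refs, sys_a, sys_b, n_samples=1000, seed=0):
    """Fraction of resampled test sets on which system A scores at least as
    well as system B under `metric` (Koehn-style paired bootstrap)."""
    random.seed(seed)
    idx = list(range(len(refs)))
    wins = 0
    for _ in range(n_samples):
        sample = [random.choice(idx) for _ in idx]   # resample with replacement
        r = [refs[i] for i in sample]
        a = [sys_a[i] for i in sample]
        b = [sys_b[i] for i in sample]
        if metric(a, r) >= metric(b, r):
            wins += 1
    return wins / n_samples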
We can see that without domain adaptation, the SMT systems perform significantly better than the NMT system on the resource poor domains, i.e., IWSLT-CE and WIKI-CJ; while on the resource rich domains, i.e., NTCIR-CE and ASPEC-CJ, NMT outperforms SMT. Directly using the SMT/NMT models trained on the out-of-domain data to translate the in-domain data shows bad performance. With our proposed “Mixed fine tuning" domain adaptation method, NMT significantly outperforms SMT on the in-domain tasks.
Comparing different domain adaptation methods, “Mixed fine tuning” shows the best performance. We believe the reason for this is that “Mixed fine tuning” can address the over-fitting problem of “Fine tuning.” We observed that while “Fine tuning” overfits quickly after only 1 epoch of training, “Mixed fine tuning” only slightly overfits until convergence. In addition, “Mixed fine tuning” does not worsen the quality of out-of-domain translations, while “Fine tuning” and “Multi domain” do. One shortcoming of “Mixed fine tuning” is that compared to “fine tuning,” it took a longer time for the fine tuning process, as the time until convergence is essentially proportional to the size of the data used for fine tuning.
“Multi domain” performs either as well as (IWSLT-CE) or worse than (WIKI-CJ) “Fine tuning,” but “Mixed fine tuning” performs either significantly better than (IWSLT-CE) or is comparable to (WIKI-CJ) “Fine tuning.” We believe the performance difference between the two tasks is due to their unique characteristics. As WIKI-CJ data is of relatively poorer quality, mixing it with out-of-domain data does not have the same level of positive effects as those obtained by the IWSLT-CE data.
The domain tags are helpful for both “Multi domain” and “Mixed fine tuning.” Essentially, further fine tuning on in-domain data does not help for both “Multi domain” and “Mixed fine tuning.” We believe the reason for this is that the “Multi domain” and “Mixed fine tuning” methods already utilize the in-domain data used for fine tuning.
Conclusion
In this paper, we proposed a novel domain adaptation method named “mixed fine tuning” for NMT. We empirically compared our proposed method against fine tuning and multi domain methods, and have shown that it is effective but is sensitive to the quality of the in-domain data used.
In the future, we plan to incorporate an RNN model into our current architecture to leverage abundant in-domain monolingual corpora. We also plan on exploring the effects of synthetic data by back translating large in-domain monolingual corpora. | 0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE. |
46ee1cbbfbf0067747b28bdf4c8c2f7dc8955650 | 46ee1cbbfbf0067747b28bdf4c8c2f7dc8955650_0 | Q: What kinds of neural networks did they use in this paper?
Text: Introduction
One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .
Domain adaptation has been shown to be effective for low resource NMT. The conventional domain adaptation method is fine tuning, in which an out-of-domain model is further trained on in-domain data BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . However, fine tuning tends to overfit quickly due to the small size of the in-domain data. On the other hand, multi domain NMT BIBREF8 involves training a single NMT model for multiple domains. This method adds tags “<2domain>" by modifying the parallel corpora to indicate domains without any modifications to the NMT system architecture. However, this method has not been studied for domain adaptation in particular.
Motivated by these two lines of studies, we propose a new domain adaptation method called “mixed fine tuning," where we first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus that is a mix of the in-domain and out-of-domain corpora. Fine tuning on the mixed corpus instead of the in-domain corpus can address the overfitting problem. All corpora are augmented with artificial tags to indicate specific domains. We tried two different corpora settings:
We observed that “mixed fine tuning" works significantly better than methods that use fine tuning and domain tag based approaches separately. Our contributions are twofold:
Related Work
Besides fine tuning and multi domain NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural network (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 has been studied.
Methods for Comparison
All the methods that we compare are simple and do not need any modifications to the NMT system.
Fine Tuning
Fine tuning is the conventional way for domain adaptation, and thus serves as a baseline in this study. In this method, we first train an NMT system on a resource rich out-of-domain corpus till convergence, and then fine tune its parameters on a resource poor in-domain corpus (Figure 1 ).
Multi Domain
The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain.
We can further fine tune the multi domain model on the in-domain data, which is named as “multi domain + fine tuning.”
Mixed Fine Tuning
The proposed mixed fine tuning method is a combination of the above methods (shown in Figure 2 ). The training procedure is as follows:
Train an NMT model on out-of-domain data till convergence.
Resume training the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) till convergence.
By default, we utilize domain tags, but we also consider settings where we do not use them (i.e., “w/o tags”). We can further fine tune the model from step 2 on the in-domain data, which is named as “mixed fine tuning + fine tuning.”
Note that in the “fine tuning” method, the vocabulary obtained from the out-of-domain data is used for the in-domain data; while for the “multi domain” and “mixed fine tuning” methods, we use a vocabulary obtained from the mixed in-domain and out-of-domain data for all the training stages.
Experimental Settings
We conducted NMT domain adaptation experiments in two different settings as follows:
High Quality In-domain Corpus Setting
Chinese-to-English translation was the focus of the high quality in-domain corpus setting. We utilized the resource rich patent out-of-domain data to augment the resource poor spoken language in-domain data. The patent domain MT was conducted on the Chinese-English subtask (NTCIR-CE) of the patent MT task at the NTCIR-10 workshop BIBREF9 . The NTCIR-CE task uses 1000000, 2000, and 2000 sentences for training, development, and testing, respectively. The spoken domain MT was conducted on the Chinese-English subtask (IWSLT-CE) of the TED talk MT task at the IWSLT 2015 workshop BIBREF10 . The IWSLT-CE task contains 209,491 sentences for training. We used the dev 2010 set for development, containing 887 sentences. We evaluated all methods on the 2010, 2011, 2012, and 2013 test sets, containing 1570, 1245, 1397, and 1261 sentences, respectively.
Low Quality In-domain Corpus Setting
Chinese-to-Japanese translation was the focus of the low quality in-domain corpus setting. We utilized the resource rich scientific out-of-domain data to augment the resource poor Wikipedia (essentially open) in-domain data. The scientific domain MT was conducted on the Chinese-Japanese paper excerpt corpus (ASPEC-CJ) BIBREF11 , which is one subtask of the workshop on Asian translation (WAT) BIBREF15 . The ASPEC-CJ task uses 672315, 2090, and 2107 sentences for training, development, and testing, respectively. The Wikipedia domain task was conducted on a Chinese-Japanese corpus automatically extracted from Wikipedia (WIKI-CJ) BIBREF12 using the ASPEC-CJ corpus as a seed. The WIKI-CJ task contains 136013, 198, and 198 sentences for training, development, and testing, respectively.
MT Systems
For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100.
For performance comparison, we also conducted experiments on phrase based SMT (PBSMT). We used the Moses PBSMT system BIBREF17 for all of our MT experiments. For the respective tasks, we trained 5-gram language models on the target side of the training data using the KenLM toolkit with interpolated Kneser-Ney discounting, respectively. In all of our experiments, we used the GIZA++ toolkit for word alignment; tuning was performed by minimum error rate training BIBREF18 , and it was re-run for every experiment.
For both MT systems, we preprocessed the data as follows. For Chinese, we used KyotoMorph for segmentation, which was trained on the CTB version 5 (CTB5) and SCTB BIBREF19 . For English, we tokenized and lowercased the sentences using the tokenizer.perl script in Moses. Japanese was segmented using JUMAN BIBREF20 .
For NMT, we further split the words into sub-words using byte pair encoding (BPE) BIBREF21 , which has been shown to be effective for the rare word problem in NMT. Another motivation of using sub-words is making the different domains share more vocabulary, which is important especially for the resource poor domain. For the Chinese-to-English tasks, we trained two BPE models on the Chinese and English vocabularies, respectively. For the Chinese-to-Japanese tasks, we trained a joint BPE model on both of the Chinese and Japanese vocabularies, because Chinese and Japanese could share some vocabularies of Chinese characters. The number of merge operations was set to 30,000 for all the tasks.
Results
Tables 1 and 2 show the translation results on the Chinese-to-English and Chinese-to-Japanese tasks, respectively. The entries with SMT and NMT are the PBSMT and NMT systems, respectively; others are the different methods described in Section "Methods for Comparison" . In both tables, the numbers in bold indicate the best system and all systems that were not significantly different from the best system. The significance tests were performed using the bootstrap resampling method BIBREF22 at $p < 0.05$ .
We can see that without domain adaptation, the SMT systems perform significantly better than the NMT system on the resource poor domains, i.e., IWSLT-CE and WIKI-CJ; while on the resource rich domains, i.e., NTCIR-CE and ASPEC-CJ, NMT outperforms SMT. Directly using the SMT/NMT models trained on the out-of-domain data to translate the in-domain data shows bad performance. With our proposed “Mixed fine tuning" domain adaptation method, NMT significantly outperforms SMT on the in-domain tasks.
Comparing different domain adaptation methods, “Mixed fine tuning” shows the best performance. We believe the reason for this is that “Mixed fine tuning” can address the over-fitting problem of “Fine tuning.” We observed that while “Fine tuning” overfits quickly after only 1 epoch of training, “Mixed fine tuning” only slightly overfits until convergence. In addition, “Mixed fine tuning” does not worsen the quality of out-of-domain translations, while “Fine tuning” and “Multi domain” do. One shortcoming of “Mixed fine tuning” is that compared to “fine tuning,” it took a longer time for the fine tuning process, as the time until convergence is essentially proportional to the size of the data used for fine tuning.
“Multi domain” performs either as well as (IWSLT-CE) or worse than (WIKI-CJ) “Fine tuning,” but “Mixed fine tuning” performs either significantly better than (IWSLT-CE) or is comparable to (WIKI-CJ) “Fine tuning.” We believe the performance difference between the two tasks is due to their unique characteristics. As WIKI-CJ data is of relatively poorer quality, mixing it with out-of-domain data does not have the same level of positive effects as those obtained by the IWSLT-CE data.
The domain tags are helpful for both “Multi domain” and “Mixed fine tuning.” Essentially, further fine tuning on in-domain data does not help for both “Multi domain” and “Mixed fine tuning.” We believe the reason for this is that the “Multi domain” and “Mixed fine tuning” methods already utilize the in-domain data used for fine tuning.
Conclusion
In this paper, we proposed a novel domain adaptation method named “mixed fine tuning” for NMT. We empirically compared our proposed method against fine tuning and multi domain methods, and have shown that it is effective but is sensitive to the quality of the in-domain data used.
In the future, we plan to incorporate an RNN model into our current architecture to leverage abundant in-domain monolingual corpora. We also plan on exploring the effects of synthetic data by back translating large in-domain monolingual corpora. | LSTMs |
4f12b41bd3bb2610abf7d7835291496aa69fb78c | 4f12b41bd3bb2610abf7d7835291496aa69fb78c_0 | Q: How did they use the domain tags?
Text: Introduction
One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .
Domain adaptation has been shown to be effective for low resource NMT. The conventional domain adaptation method is fine tuning, in which an out-of-domain model is further trained on in-domain data BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . However, fine tuning tends to overfit quickly due to the small size of the in-domain data. On the other hand, multi domain NMT BIBREF8 involves training a single NMT model for multiple domains. This method adds tags “<2domain>" by modifying the parallel corpora to indicate domains without any modifications to the NMT system architecture. However, this method has not been studied for domain adaptation in particular.
Motivated by these two lines of studies, we propose a new domain adaptation method called “mixed fine tuning," where we first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus that is a mix of the in-domain and out-of-domain corpora. Fine tuning on the mixed corpus instead of the in-domain corpus can address the overfitting problem. All corpora are augmented with artificial tags to indicate specific domains. We tried two different corpora settings:
We observed that “mixed fine tuning" works significantly better than methods that use fine tuning and domain tag based approaches separately. Our contributions are twofold:
Related Work
Besides fine tuning and multi domain NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural network (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 has been studied.
Methods for Comparison
All the methods that we compare are simple and do not need any modifications to the NMT system.
Fine Tuning
Fine tuning is the conventional way for domain adaptation, and thus serves as a baseline in this study. In this method, we first train an NMT system on a resource rich out-of-domain corpus till convergence, and then fine tune its parameters on a resource poor in-domain corpus (Figure 1 ).
Multi Domain
The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain.
We can further fine tune the multi domain model on the in-domain data, which is named as “multi domain + fine tuning.”
Mixed Fine Tuning
The proposed mixed fine tuning method is a combination of the above methods (shown in Figure 2 ). The training procedure is as follows:
Train an NMT model on out-of-domain data till convergence.
Resume training the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) till convergence.
By default, we utilize domain tags, but we also consider settings where we do not use them (i.e., “w/o tags”). We can further fine tune the model from step 2 on the in-domain data, which is named as “mixed fine tuning + fine tuning.”
Note that in the “fine tuning” method, the vocabulary obtained from the out-of-domain data is used for the in-domain data; while for the “multi domain” and “mixed fine tuning” methods, we use a vocabulary obtained from the mixed in-domain and out-of-domain data for all the training stages.
Experimental Settings
We conducted NMT domain adaptation experiments in two different settings as follows:
High Quality In-domain Corpus Setting
Chinese-to-English translation was the focus of the high quality in-domain corpus setting. We utilized the resource rich patent out-of-domain data to augment the resource poor spoken language in-domain data. The patent domain MT was conducted on the Chinese-English subtask (NTCIR-CE) of the patent MT task at the NTCIR-10 workshop BIBREF9 . The NTCIR-CE task uses 1000000, 2000, and 2000 sentences for training, development, and testing, respectively. The spoken domain MT was conducted on the Chinese-English subtask (IWSLT-CE) of the TED talk MT task at the IWSLT 2015 workshop BIBREF10 . The IWSLT-CE task contains 209,491 sentences for training. We used the dev 2010 set for development, containing 887 sentences. We evaluated all methods on the 2010, 2011, 2012, and 2013 test sets, containing 1570, 1245, 1397, and 1261 sentences, respectively.
Low Quality In-domain Corpus Setting
Chinese-to-Japanese translation was the focus of the low quality in-domain corpus setting. We utilized the resource rich scientific out-of-domain data to augment the resource poor Wikipedia (essentially open) in-domain data. The scientific domain MT was conducted on the Chinese-Japanese paper excerpt corpus (ASPEC-CJ) BIBREF11 , which is one subtask of the workshop on Asian translation (WAT) BIBREF15 . The ASPEC-CJ task uses 672315, 2090, and 2107 sentences for training, development, and testing, respectively. The Wikipedia domain task was conducted on a Chinese-Japanese corpus automatically extracted from Wikipedia (WIKI-CJ) BIBREF12 using the ASPEC-CJ corpus as a seed. The WIKI-CJ task contains 136013, 198, and 198 sentences for training, development, and testing, respectively.
MT Systems
For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100.
For performance comparison, we also conducted experiments on phrase based SMT (PBSMT). We used the Moses PBSMT system BIBREF17 for all of our MT experiments. For the respective tasks, we trained 5-gram language models on the target side of the training data using the KenLM toolkit with interpolated Kneser-Ney discounting, respectively. In all of our experiments, we used the GIZA++ toolkit for word alignment; tuning was performed by minimum error rate training BIBREF18 , and it was re-run for every experiment.
For both MT systems, we preprocessed the data as follows. For Chinese, we used KyotoMorph for segmentation, which was trained on the CTB version 5 (CTB5) and SCTB BIBREF19 . For English, we tokenized and lowercased the sentences using the tokenizer.perl script in Moses. Japanese was segmented using JUMAN BIBREF20 .
For NMT, we further split the words into sub-words using byte pair encoding (BPE) BIBREF21 , which has been shown to be effective for the rare word problem in NMT. Another motivation of using sub-words is making the different domains share more vocabulary, which is important especially for the resource poor domain. For the Chinese-to-English tasks, we trained two BPE models on the Chinese and English vocabularies, respectively. For the Chinese-to-Japanese tasks, we trained a joint BPE model on both of the Chinese and Japanese vocabularies, because Chinese and Japanese could share some vocabularies of Chinese characters. The number of merge operations was set to 30,000 for all the tasks.
Results
Tables 1 and 2 show the translation results on the Chinese-to-English and Chinese-to-Japanese tasks, respectively. The entries with SMT and NMT are the PBSMT and NMT systems, respectively; others are the different methods described in Section "Methods for Comparison" . In both tables, the numbers in bold indicate the best system and all systems that were not significantly different from the best system. The significance tests were performed using the bootstrap resampling method BIBREF22 at $p < 0.05$ .
We can see that without domain adaptation, the SMT systems perform significantly better than the NMT system on the resource poor domains, i.e., IWSLT-CE and WIKI-CJ; while on the resource rich domains, i.e., NTCIR-CE and ASPEC-CJ, NMT outperforms SMT. Directly using the SMT/NMT models trained on the out-of-domain data to translate the in-domain data shows bad performance. With our proposed “Mixed fine tuning" domain adaptation method, NMT significantly outperforms SMT on the in-domain tasks.
Comparing different domain adaptation methods, “Mixed fine tuning” shows the best performance. We believe the reason for this is that “Mixed fine tuning” can address the over-fitting problem of “Fine tuning.” We observed that while “Fine tuning” overfits quickly after only 1 epoch of training, “Mixed fine tuning” only slightly overfits until convergence. In addition, “Mixed fine tuning” does not worsen the quality of out-of-domain translations, while “Fine tuning” and “Multi domain” do. One shortcoming of “Mixed fine tuning” is that compared to “fine tuning,” it took a longer time for the fine tuning process, as the time until convergence is essentially proportional to the size of the data used for fine tuning.
“Multi domain” performs either as well as (IWSLT-CE) or worse than (WIKI-CJ) “Fine tuning,” but “Mixed fine tuning” performs either significantly better than (IWSLT-CE) or is comparable to (WIKI-CJ) “Fine tuning.” We believe the performance difference between the two tasks is due to their unique characteristics. As WIKI-CJ data is of relatively poorer quality, mixing it with out-of-domain data does not have the same level of positive effects as those obtained by the IWSLT-CE data.
The domain tags are helpful for both “Multi domain” and “Mixed fine tuning.” Essentially, further fine tuning on in-domain data does not help for both “Multi domain” and “Mixed fine tuning.” We believe the reason for this is that the “Multi domain” and “Mixed fine tuning” methods already utilize the in-domain data used for fine tuning.
Conclusion
In this paper, we proposed a novel domain adaptation method named “mixed fine tuning” for NMT. We empirically compared our proposed method against fine tuning and multi domain methods, and have shown that it is effective but is sensitive to the quality of the in-domain data used.
In the future, we plan to incorporate an RNN model into our current architecture to leverage abundant in-domain monolingual corpora. We also plan on exploring the effects of synthetic data by back translating large in-domain monolingual corpora. | Appending the domain tag “<2domain>" to the source sentences of the respective corpora |
65e6a1cc2590b139729e7e44dce6d9af5dd2c3b5 | 65e6a1cc2590b139729e7e44dce6d9af5dd2c3b5_0 | Q: Why mixed initiative multi-turn dialogs are the greatest challenge in building open-domain conversational agents?
Text: Introduction
The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.
Unlike much previous work on conversational AI, SlugBot could not and did not assume that the user had an “information need” BIBREF0 , BIBREF1 , BIBREF2 . Rather, the design of the Alexa Prize was aimed at open conversations that could engage the user, through any type of dialogue or chitchat, discussing films and books, gossiping about celebrities, playing verbal games, telling stories or sharing experiences, or any other of many different types of activities that conversation is often used for.
This open design foregrounds many longstanding challenges that have not been solved even for task-oriented dialogue systems. These include:
This paper is structured around the “lessons learned” with respect to these challenges from our experience building SlugBot. To be clear, we are not offering a solution to these problems: instead our intention is simply to highlight the difficulties with developing adequate computational models of these phenomena that particularly arise in the context of open-domain conversations, where users cannot be assumed to be pursuing a particular task or information need. We will attempt to motivate our hypothesis that a comprehensive solution to these challenges for open-domain dialogue requires a much deeper understanding and utilization of the semantic relations that underly dialogue coherence.
For example, consider dialogue focused on content related to the movie domain. This should be one of the easiest domains because it is well-structured, and there are existing systems handling conversations where there is a specified user information need or task, such as finding films with particular properties, finding out what is playing and where, or booking a movie ticket BIBREF3 , BIBREF4 , BIBREF5 . Moreover, the Internet Movie Database (IMDB) BIBREF6 provides information on plot, rating, and actors that can be leveraged to support conversations. IMDB also makes use of the Schema.org BIBREF7 structure to connect common entities to their related attribute types (such as Actor $\rightarrow $ Person $\rightarrow $ birthDate), allowing the system to retrieve a large set of possible next topics and related facts and entities.
However, remember that SlugBot is based on the assumption that the user might simply enjoy talking about films and related entities and therefore may freely move the conversational focus among different movie entities, along with the vast array of semantically-associated movie attributes: movies have actors, genres, plots, and awards; actors have names, affiliations, other movies they were in, awards, etc. Actors are people, who have spouses, families and friends, and engage in other life activities besides acting, such as political advocacy.
A potential dialogue is shown in Table 1 . The interaction might appear to be simple enough: the user chooses to discuss movies, and selects Jason Bourne as the specific movie she is interested in, the system finds the movie in IMDB, and then provides information on its rating, lead actor, and plot. The user then changes the topic to other movies with the same actor, and the conversation continues.
Even with the availability of IMDB, however, the interaction is not totally straightforward. The RHS of Table 1 describes some of the required competencies and decisions SlugBot must make. First, Slugbot must be able to perform coreference resolution and recognize that the movie and it in turns U6 and U8 are coreferential. We estimate the accuracy of noun-phrase coreference resolution to only be about 70% for off-the-shelf tools applied to dialogue, since most of them are targeted to text BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 .
More challenging is that at each system turn, there are a large number of conversational moves that are possible. Making good decisions about what to say next requires balancing a dialogue policy as to what dialogue acts might be good in this context, with real-time information as to what types of content might be possible to use in this context. Slugbot could offer an opinion as in turn S3, ask a follow-on question as in S3, take the initiative to provide unasked for information, as in S5, or decide, e.g. in the case of the user's request for plot information, to use search to retrieve some relevant content. Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence.
Modeling Discourse Coherence
In open-domain conversation, dialogue coherence between related turns must be maintained. What underlies dialogue coherence goes beyond simple word overlap or similarity, and it is clear that neural models of open-domain conversational dialogue do not yet capture it. Theories of discourse posit that there are a small number of semantic relations that can hold between adjacent turns: at the most general level these are contingency, comparison, expansion, and temporal order BIBREF16 , BIBREF17 , BIBREF18 . We posit that one way to allow SlugBot to take the initiative and produce a turn that maintains discourse coherence is to find content to use in Slugbot's next turn that instantiates a valid semantic relation between the current user turn and SlugBot's next turn. One of the strongest bases for such semantic relations is the set of relations captured by ontologies or frames, which give us related entities, e.g. movies have actors and directors BIBREF4 , BIBREF21 . These types of relations can be used to instantiate the expansion relation, which basically captures moving to strongly related subtopics, often by chaining off a particular discourse entity. To find content to instantiate the expansion relation to use in Slugbot's next turn (taking the initiative), we carry out the following pipeline:
In the case of movies, the structure of IMDB, as discussed above, allows us to link between related entities and attributes using various DB keys. However other conversational domains do not have freely available richly structured information such as this. It is rare for a single resource to aggregate all the information that might be useful, so SlugBot must be able to leverage information and integrate information from multiple sources. But state-of-the-art knowledge bases and ontologies are still limited. Table 2 lists some of the resources that we have found to be most useful for search and structured information.
Like movies, sports is another domain that has rich structure, and in which there is broad user interest. Search results for a query about "Madison Bumgarner" are in Figure 1 , showcasing a sample of the different information retrievable from each source (Step 2 of the pipeline).
From the Google Knowledge Graph result (Figure 1) we are able to ascertain the entity type, a brief description, and a relevant Wikipedia page (Figure 1), which we can use to find accurate structured information. We may further augment our knowledge by using the information returned by the Google Knowledge Graph as parameters to our YAGO or DBpedia query, which can more easily extract specific relationships between an entity and its attributes. For example, the results returned by YAGO for the "Madison Bumgarner" query contain a connection to the headline Struggling MadBum might not garner next start, which is contextually relevant data not encapsulated anywhere in the previously examined results.
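One plausible way to implement this kind of structured lookup is to query DBpedia's public SPARQL endpoint; the sketch below is illustrative only (it is not SlugBot's actual code, and the abstract property is just one example of the attributes one might retrieve).

import requests

def dbpedia_abstract(entity_uri):
    """Fetch the English abstract for a DBpedia entity via the public
    SPARQL endpoint; returns None if nothing is found."""
    query = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <%s> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = 'en')
    }""" % entity_uri
    resp = requests.get("https://dbpedia.org/sparql",
                        params={"query": query, "format": "application/json"})
    bindings = resp.json()["results"]["bindings"]
    return bindings[0]["abstract"]["value"] if bindings else None

print(dbpedia_abstract("http://dbpedia.org/resource/Madison_Bumgarner"))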
There exists, however, a disconnect between the resources: some entities are available in one resource and not another, or there may be inconsistent information across resources. While it would be nice not to have to anticipate the types of integration that are needed, our take-away is that, at present, it appears we have to accomplish the steps in our pipeline by integrating knowledge from different resources in advance, even though projects such as YAGO have already been working on such integration for at least ten years.
Other discourse coherence relations besides expansion are also viable candidates for selecting content for next turns, but finding content that instantiates these relations can be a challenging problem in itself. For example, in casual conversation, it is common to provide opinions and then perhaps further take the initiative and justify them. The justification of an opinion is a type of contingency relation: we describe how we curate content to provide justifications in Section "Mixed Initiative Dialogue" .
We have also been able to use the temporal relation in a limited way by drawing on narratively structured sources, such as personal stories in blogs. Since these stories are told in temporal order, we can repurpose the content of these blogs to tell stories, maintaining pre-existing narrative coherence when the system produces a sequence of turns BIBREF33 . However, we posit that much more could be done with deep semantic discourse relations, both for recognizing such relations and for generating coherent conversational turns.
Mixed Initiative Dialogue
Mixed-initiative dialogue is key to natural conversational interaction BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF2 , and this is even more important for open domain dialogue than it is for task-oriented or information-seeking dialogue. One of our primary hypotheses, as described above, is that good models of discourse coherence will help SlugBot identify content that can be used to take the initiative. However, models of discourse coherence have rarely been applied to conversation BIBREF39 , BIBREF40 , BIBREF41 , and thus there is considerable work to be done simply in understanding how these relations can be instantiated in dialogue.
A further challenge arises from the fact that both system and user options for dialogue acts are extremely varied at each turn: user intents can be to provide opinions, give or solicit information, contrast two possibilities, request the system to perform an action, and more. One reasonable taxonomy for the types of dialogue acts available to SlugBot could be based, for example, on the dialogue act annotations in the Switchboard corpus BIBREF42 .
Here, we consider a simple case combining discourse relations and dialogue acts that we have implemented in SlugBot in order to take the initiative in a way that we hoped the user would find interesting. Our aim was to utilize the contingency discourse relation to connect a statement of opinion and its justification. We designed a template containing both arguments of the contingency relation, namely I think {entity} is {sentiment} because {justification}. We construct a table of argument pairs that can instantiate this relation, as shown in Table 3. This table can be populated by crowd-sourcing or by using search as a pre-processing step.
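A minimal sketch of how such a table could drive the template is shown below; the entities, sentiments, and justifications here are invented placeholders rather than the curated contents of Table 3.

```python
# Sketch: instantiating the "I think {entity} is {sentiment} because {justification}"
# template from a table of argument pairs. Table contents are placeholders.
OPINION_TABLE = {
    "Batman": ("interesting", "he is a superhero without any superpowers"),
    "baseball": ("fun to watch", "a whole game can turn on a single pitch"),
}

def opinion_with_justification(entity):
    if entity not in OPINION_TABLE:
        return None  # fall back to search or a different dialogue act
    sentiment, justification = OPINION_TABLE[entity]
    return f"I think {entity} is {sentiment} because {justification}."

print(opinion_with_justification("Batman"))
```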
Table 4 illustrates how this is used in our conversations about comics. At Line 6, when the user asks Who is your favorite character?, it is most appropriate to provide an opinion. It is difficult to imagine retrieving search-based data that contains a contextually relevant opinion, and even more difficult to imagine that, had search returned such an opinion, search could be used a second time to retrieve a justification for that opinion and answer the user's follow-up question in Line 8, Okay why?. The source text for the search would have to be annotated for the type of content that could be used to provide justifications, and search would have to support these types of semantic relations.
Natural Language Generation
The current challenges for natural language generation, in our view, arise from the need to combine information from structured and unstructured sources when producing conversational utterances. SlugBot currently uses a combination of pre-written templates, sentence selection, and techniques for telling stories that are based on converting monologic stories to dialogic sequences BIBREF33 .
Structured data, when available, can do more than structure a search result: it can also be easier to use within a conversation because it provides the structure needed for high-precision natural language generation BIBREF22 , BIBREF43 . More precisely, a small set of generic templates with various slots can be filled with information from structured data sources to ensure high-quality, accurate responses. These generic templates can be hand-crafted, or prepared in advance by learning natural language generation templates automatically from appropriate conversational domain sources such as different types of user-generated content BIBREF44 , BIBREF23 , as illustrated by our justification initiatives in Section "Mixed Initiative Dialogue" above.
For general fact-based questions, on the other hand, search content can be used directly. For example, at line 14 in Table 5, when the user asks What was the first movie to feature a vampire?, search provides us with a good response. This, however, introduces the challenge of updating the discourse context with the right representation of the two movies under discussion, so that they are available for follow-on coreference. This is an open problem.
It is clear that in order to use a semi-structured approach, we need to determine when to utilize each source. Structured data can be easier to formulate into system responses and can often more easily handle on-topic follow-up questions, but is more limited in scope. An obvious approach, also used in the Watson Jeopardy system BIBREF45 , is to pool responses from both sources and rank them. We have not, to date, collected enough data to build a ranker.
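Since we have not yet collected enough data to train a ranker, the following sketch shows only the shape of the pooling step; the scoring function is a stand-in heuristic, not a learned model, and the candidate strings are invented.

```python
# Sketch: pooling candidate responses from structured data and from search,
# then ranking them. The score is a toy heuristic standing in for a learned ranker.
def rank_candidates(structured_candidates, search_candidates):
    pooled = ([("structured", c) for c in structured_candidates]
              + [("search", c) for c in search_candidates])

    def score(item):
        source, text = item
        # Prefer longer, more contentful answers, with a small bonus for
        # structured data, which tends to be higher precision when present.
        return len(text.split()) + (5 if source == "structured" else 0)

    return sorted(pooled, key=score, reverse=True)

candidates = rank_candidates(
    ["Madison Bumgarner plays for the San Francisco Giants."],
    ["Struggling MadBum might not garner next start, according to a recent headline."],
)
print(candidates[0])
```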
Our plan is to apply a combination of reinforcement learning and learned ranking functions over utterance variants in context to SlugBot conversations as we move forward with our own data collection, outside of the Alexa Prize competition BIBREF46 , BIBREF47 , BIBREF48 , BIBREF49 , BIBREF50 . The first step, however, is to use the Alexa Prize competition data to learn a Paradise-Open-Domain evaluation function, with additional metrics relevant to open-domain dialogue, e.g. independent variable metrics that predict overall dialogue quality such as response delay, vocabulary diversity, dialogue act sequence n-grams BIBREF51 , conversational depth, number of reprompts BIBREF52 , and other measures that can be automatically logged. Many of the required measures have been used over the last 20 years in Paradise to evaluate task-oriented dialogue systems, and they remain highly relevant to overall dialogue quality in open-domain dialogue systems BIBREF53 , BIBREF54 , BIBREF55 . We predict this can improve the overall performance of the system, as demonstrated in Table 6. Here, the structured data is sparse, resulting in an uninteresting response, while search returns a very robust answer. Our Paradise-Open-Domain evaluation function would need to learn, through ranking, to place priority on the result returned by search despite having structured data.
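The sketch below shows the general form such an evaluation function could take: a regression over automatically logged metrics. The metric names and weights are purely illustrative; in practice the weights would be fit against user ratings from the competition data.

```python
# Sketch: a Paradise-style linear quality predictor over logged dialogue metrics.
# Weights are illustrative; they would normally be estimated by regressing
# (normalized) metrics against collected user ratings.
ILLUSTRATIVE_WEIGHTS = {
    "response_delay_sec": -0.4,
    "vocabulary_diversity": 0.8,
    "conversational_depth": 0.6,
    "num_reprompts": -0.7,
}

def predicted_quality(logged_metrics):
    return sum(ILLUSTRATIVE_WEIGHTS[name] * value
               for name, value in logged_metrics.items()
               if name in ILLUSTRATIVE_WEIGHTS)

print(predicted_quality({"response_delay_sec": 1.2, "vocabulary_diversity": 0.55,
                         "conversational_depth": 4, "num_reprompts": 1}))
```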
For open domain NLG, we have also conducted experiments with neural sequence-to-sequence approaches using open domain corpora such as film dialogue, Big Bang Theory scripts, and OpenSubtitles. These approaches to date do not produce interesting utterances that maintain discourse coherence. It is possible that further curation and semantic annotation of these resources, e.g. labelling semantic roles and identifying dialogue acts and discourse relations, might be helpful, but this could also introduce data sparsity. For example, in Switchboard the dialogue act distribution is highly skewed. Integrating information across multiple sources could also be further explored BIBREF33 . Recent work on hybrid neural generation approaches that use knowledge of sentence and discourse planning structures also seems promising BIBREF24 , BIBREF48 , BIBREF56 .
Conclusions
In this paper, we describe some of the challenges we encountered building SlugBot, an open domain conversational agent funded by the Amazon Alexa Prize. We have introduced more problems than we have solved, and we have attempted to support our hypothesis that we need richer models of discourse coherence and discourse semantics to allow a conversational agent to take the initiative in open domain conversations. We illustrated how search and structured information can be combined in order for SlugBot to find content to use to take the initiative and respond to the user's utterances. We propose a hybrid approach for language generation which combines templates to generate responses with sentence selection from search, and we show examples in different domains to demonstrate real-world use cases that make use of our approach. For future work, we plan to bring together resources that provide structured data from different sources into a single, accessible framework, to supply personal assistants with scalable knowledge bases that will power more natural, mixed initiative, and engaging conversations. We believe that it will be possible in the next few years to build conversational agents that can carry on a conversation for 20 minutes about many different topics. | do not follow a particular plan or pursue a particular fixed information need, integrating content found via search with content from structured data, at each system turn, there are a large number of conversational moves that are possible, most other domains do not have such high quality structured data available, live search may not be able to achieve the required speed and efficiency |
b54fc86dc2cc6994e10c1819b6405de08c496c7b | b54fc86dc2cc6994e10c1819b6405de08c496c7b_0 | Q: How is speed measured?
Text: Introduction
As the reliance on social media as a source of news increases and the reliability of sources is increasingly debated, it is important to understand how users react to various sources of news. Most studies that investigate misinformation spread in social media focus on individual events and the role of the network structure in the spread BIBREF0 , BIBREF1 , BIBREF2 or detection of false information BIBREF3 . These studies have found that the size and shape of misinformation cascades within a social network depends heavily on the initial reactions of the users. Other work has focused on the language of misinformation in social media BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 to detect types of deceptive news.
As an alternative to studying newsworthy events one at a time BIBREF10 , the current work applies linguistically-infused models to predict user reactions to deceptive and trusted news sources. Our analysis reveals differences in reaction types and speed across two social media platforms — Twitter and Reddit.
The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions.
The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.
Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources.
Reaction Type Classification
In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models.
Reddit Data
We use a manually annotated Reddit dataset from Zhang et al. zhang2017characterizing to train our reaction classification model. Annotations from 25 crowd-workers labelled the primary discourse act for 101,525 comments within 9,131 comment threads on Reddit. The Reddit IDs, but not the text content of the comments themselves, were released with the annotations. So we collected the content of Reddit posts and comments from a public archive of Reddit posts and comments. Some content was deleted prior to archival, so the dataset shown in Table TABREF3 is a subset of the original content. Despite the inability to capture all of the original dataset, Table TABREF3 shows a similar distribution between our dataset and the original.
Model
We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent.
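The architecture just described can be written down compactly; the sketch below is one plausible Keras realization. Only the overall structure (a convolutional text branch and a dense LIWC branch fused before a 9-way softmax) follows the description; the layer widths, sequence length, LIWC dimensionality, and the use of global max-pooling are assumptions.

```python
# Sketch of the late-fusion reaction classifier. Sizes are illustrative assumptions;
# in practice the embedding layer would be initialized with 200-d GloVe vectors.
from tensorflow.keras import layers, models

MAX_LEN, VOCAB, EMB_DIM, LIWC_DIM, NUM_CLASSES = 200, 50000, 200, 128, 9

# Text branch: padded token sequence for the reaction and its parent.
text_in = layers.Input(shape=(MAX_LEN,), name="tokens")
x = layers.Embedding(VOCAB, EMB_DIM)(text_in)
x = layers.Conv1D(128, 5, activation="relu")(x)
x = layers.Conv1D(128, 5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)          # "max-pooling layer" read as max over time
x = layers.Dense(128, activation="relu")(x)

# Vector branch: normalized LIWC features of the post and its parent.
liwc_in = layers.Input(shape=(LIWC_DIM,), name="liwc")
y = layers.Dense(64, activation="relu")(liwc_in)
y = layers.Dense(64, activation="relu")(y)

# Late fusion and 9-way discourse act prediction.
merged = layers.Concatenate()([x, y])
out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = models.Model(inputs=[text_in, liwc_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```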
Reaction Type Classification Results
As shown in Figure FIGREF7 , our linguistically-infused neural network model that relies solely on the content of the reaction and its parent has comparable performance to the more-complex CRF model by Zhang et al. zhang2017characterizing, which relies on content as well as additional metadata like the author, thread (e.g., the size of the thread, the number of branches), structure (e.g., the position within the thread), and community (i.e., the subreddit in which the comment is posted).
Measuring Reactions to Trusted and Deceptive News Sources
In this section, we present key results of our analysis of how often and how quickly users react to content from sources of varying credibility using the reaction types predicted by our linguistically-infused neural network model.
Twitter and Reddit News Data
We focus on trusted news sources that provide factual information with no intent to deceive and deceptive news sources. Deceptive sources are ranked by their intent to deceive as follows: clickbait (attention-grabbing, misleading, or vague headlines to attract an audience), conspiracy theory (uncorroborated or unreliable information to explain events or circumstances), propaganda (intentionally misleading information to advance a social or political agenda), and disinformation (fabricated or factually incorrect information meant to intentionally deceive readers).
Trusted, clickbait, conspiracy, and propaganda sources were previously compiled by Volkova et al. volkova2017separating through a combination of crowd-sourcing and public resources. Trusted news sources with Twitter-verified accounts were manually labeled and clickbait, conspiracy, and propaganda news sources were collected from several public resources that annotate suspicious news accounts. We collected news sources identified as spreading disinformation by the European Union's East Strategic Communications Task Force from euvsdisinfo.eu. In total, there were 467 news sources: 251 trusted and 216 deceptive.
We collected reaction data for two popular platforms, Reddit and Twitter, using public APIs over the 13 month period from January 2016 through January 2017. For our Reddit dataset, we collected all Reddit posts submitted during the 13 month period that linked to domains associated with one of our labelled news sources. Then we collected all comments that directly responded to those posts. For our Twitter dataset, we collected all tweets posted in the 13 month period that explicitly @mentioned or directly retweeted content from a source and then assigned a label to each tweet based on the class of the source @mentioned or retweeted. A breakdown of each dataset by source type is shown in Table TABREF10 . Figure FIGREF11 illustrates the distribution of deceptive news sources and reactions across the four sub-categories of deceptive news sources. In our analysis, we consider the set of all deceptive sources and the set excluding the most extreme (disinformation).
Methodology
We use the linguistically-infused neural network model from Figure FIGREF5 to label the reaction type of each tweet or comment. Using these labels, we examine how often response types occur when users react to each type of news source. For clarity, we report the five most frequently occurring reaction types (expressed in at least 5% of reactions within each source type) and compare the distributions of reaction types for each type of news source.
To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility.
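As a concrete illustration of the delay measure and the significance test, the sketch below computes reaction delays from (post time, reaction time) pairs and compares two groups with a Mann-Whitney U test; the timestamps are invented and assumed to be in seconds.

```python
# Sketch: reaction delay = reaction time minus source post time, compared across
# source types with a Mann-Whitney U test. Timestamps here are made-up seconds.
from scipy.stats import mannwhitneyu

def reaction_delays(pairs):
    """pairs: iterable of (source_post_time, reaction_time) tuples."""
    return [reaction - post for post, reaction in pairs]

trusted_delays = reaction_delays([(0, 300), (0, 900), (0, 5400), (0, 40000)])
deceptive_delays = reaction_delays([(0, 120), (0, 4000), (0, 86400), (0, 90000)])

stat, p_value = mannwhitneyu(trusted_delays, deceptive_delays, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```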
Results and Discussion
For both Twitter and Reddit datasets, we found that the primary reaction types were answer, appreciation, elaboration, question, or “other” (no label was predicted). Figure FIGREF13 illustrates the distribution of reaction types among Reddit comments (top plot) or tweets (bottom plot) responding to each type of source, as a percentage of all comments/tweets reacting to sources of the given type (i.e., trusted, all deceptive, and deceptive excluding disinformation sources).
For Twitter, we report clear differences in user reactions to trusted vs. deceptive sources. Deceptive (including disinformation) sources have a much higher rate of appreciation reactions and a lower rate of elaboration responses, compared to trusted news sources. Differences are still significant ( INLINEFORM0 ) but the trends reverse if we do not include disinformation sources. We also see an increase in the rate of question-reactions compared to trusted news sources if we exclude disinformation sources.
For Reddit, there appears to be a very similar distribution across reaction types for trusted and deceptive sources. However, MWU tests still found that the differences between trusted and deceptive news sources were statistically significant ( INLINEFORM0 ) — regardless of whether we include or exclude disinformation sources. Posts that link to deceptive sources have higher rates of question, appreciation, and answering reactions, while posts that link to trusted sources have higher rates of elaboration, agreement, and disagreement.
Next, we compared the speed with which users reacted to posts from sources of varying credibility. Our original hypothesis was that users react to posts of trusted sources faster than posts of deceptive sources. The CDFs for each source type and platform (solid and dashed lines represent Reddit and Twitter respectively) are shown in Figure FIGREF14 . We observe that the lifetime of direct reactions to news sources on Twitter is often more extended than for sources on Reddit. One exception is answer reactions, which almost always occur within the first hour after the Twitter news source originally posted the tweet being answered. This may be due to the different ways that users consume content on the two platforms. Users follow accounts on Twitter, whereas on Reddit users “follow” topics through their subscriptions to various subreddits. Users can view the news feeds of individual sources on Twitter and view all of the sources' posts. Reddit, on the other hand, is not designed to highlight individual users or news sources; instead, new posts (regardless of the source) are viewed based on their hotness score within each subreddit.
In addition, we observe that reactions to posts linked to trusted sources are less heavily concentrated within the first 12 to 15 hours of the post's lifetime on Reddit. The opposite is found on Twitter. Twitter sources may have a larger range of reaction delays, but they are also more heavily concentrated in the lower end of that range ( INLINEFORM0 ).
Related Work
As we noted above, most studies that examine misinformation spread focus on individual events such as natural disasters BIBREF17 , political elections BIBREF18 , or crises BIBREF19 and examine the response to the event on social media. A recent study by Vosoughi et al. vosoughi2018spread found that news stories that were fact-checked and found to be false spread faster and to more people than news items found to be true. In contrast, our methodology considers immediate reactions to news sources of varying credibility, so we can determine whether certain reactions or reactions to trusted or deceptive news sources evoke more or faster responses from social media users.
Conclusion
In the current work, we have presented a content-based model that classifies user reactions into one of nine types, such as answer, elaboration, and question, and a large-scale analysis of Twitter posts and Reddit comments in response to content from news sources of varying credibility.
Our analysis of user reactions to trusted and deceptive sources on Twitter and Reddit shows significant differences in the distribution of reaction types for trusted versus deceptive news. However, due to differences in the user interface, algorithmic design, or user-base, we find that Twitter users react to trusted and deceptive sources very differently than Reddit users do. For instance, Twitter users questioned disinformation sources less often and more slowly than they did trusted news sources; Twitter users also expressed appreciation towards disinformation sources more often and faster than towards trusted sources. Results from Reddit show similar, but far less pronounced, patterns.
Future work may focus on analysis of reaction behavior from automated (i.e., 'bot'), individual, or organization accounts; on additional social media platforms and languages; or between more fine-grained categories of news source credibility.
Acknowledgments
The research described in this paper is based on Twitter and Reddit data collected by the University of Notre Dame using public APIs. The research was supported by the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. This research is also supported by the Defense Advanced Research Projects Agency (DARPA), contract W911NF-17-C-0094. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. | time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred |
b43a8a0f4b8496b23c89730f0070172cd5dca06a | b43a8a0f4b8496b23c89730f0070172cd5dca06a_0 | Q: What is the architecture of their model?
| we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent. |
b161febf86cdd58bd247a934120410068b24b7d1 | b161febf86cdd58bd247a934120410068b24b7d1_0 | Q: What are the nine types?
| agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, question, other |
d40662236eed26f17dd2a3a9052a4cee1482d7d6 | d40662236eed26f17dd2a3a9052a4cee1482d7d6_0 | Q: How do they represent input features of their model to train embeddings?
Text: Introduction
Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.
Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.
An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.
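The point about distance computation can be made concrete in a few lines; the embedding dimensionality below is arbitrary and the vectors are random stand-ins for learned embeddings.

```python
# Sketch: with fixed-dimensional acoustic word embeddings, comparing two word
# segments reduces to a single vector distance instead of a DTW alignment.
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

emb_a = np.random.randn(512)  # embedding of segment A (stand-in)
emb_b = np.random.randn(512)  # embedding of segment B (stand-in)
print(cosine_distance(emb_a, emb_b))
```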
There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision.
Related work
We next briefly describe the most closely related prior work.
Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.
Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.
Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.
The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.
Approach
An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, $X = x_1, x_2, \ldots, x_T$, where each $x_t$ is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, $g(X)$. The basic embedding model structure we use is shown in Fig. FIGREF1. The model consists of a deep RNN with some number $S$ of stacked layers, whose final hidden state vector is passed as input to a set of $F$ fully connected layers; the output of the final fully connected layer is the embedding $g(X)$.
The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to "remember" all of the needed intermediate information. Some of that information may not be needed in the final embedding. In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.
Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .
In an LSTM RNN, at each time frame both the hidden state $h_t$ and an associated “cell memory” vector $c_t$ are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:
$$\begin{aligned} i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\ f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\ o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\ \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\ c_t &= f_t \circ c_{t-1} + i_t \circ \tilde{c}_t \\ h_t &= o_t \circ \tanh(c_t) \end{aligned}$$
where $i_t$, $f_t$, $o_t$, $\tilde{c}_t$, $c_t$, and $h_t$ are all vectors of the same dimensionality, the $W$ and $U$ terms are learned weight matrices of the appropriate sizes, the $b$ terms are learned bias vectors, $\sigma$ is a componentwise logistic activation, and $\circ$ refers to the Hadamard (componentwise) product.
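The gating above can be made concrete with a minimal NumPy sketch of a single LSTM time step; the dictionary-based parameters, initialization scale, and the 39/512 dimensionalities are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold input, recurrent, and bias parameters
    for the input (i), forget (f), output (o), and candidate-cell (c) blocks."""
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    c_t = f * c_prev + i * c_tilde          # Hadamard (componentwise) products
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Illustrative sizes: 39-dim acoustic frame, 512-dim hidden state.
d_in, d_h = 39, 512
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((d_h, d_in)) * 0.01 for k in "ifoc"}
U = {k: rng.standard_normal((d_h, d_h)) * 0.01 for k in "ifoc"}
b = {k: np.zeros(d_h) for k in "ifoc"}
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W, U, b)
```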
Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate $r_t$ and an update gate $z_t$ as described below for a single-layer network:
$$\begin{aligned} r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\ z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\ \tilde{h}_t &= \tanh(W_h x_t + U_h (r_t \circ h_{t-1}) + b_h) \\ h_t &= z_t \circ h_{t-1} + (1 - z_t) \circ \tilde{h}_t \end{aligned}$$
where $r_t$, $z_t$, $\tilde{h}_t$, and $h_t$ are all of the same dimensionality, the $W$ and $U$ terms are learned weight matrices of the appropriate size, and $b_r$, $b_z$, and $b_h$ are learned bias vectors.
All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors $h_t^{(l)}$, $c_t^{(l)}$, and so on for layer $l$. For all but the first layer, the input $x_t$ is replaced by the hidden state vector from the previous layer, $h_t^{(l-1)}$.
For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.
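To tie the pieces together, here is a hedged PyTorch sketch of an embedding network of this shape. The original experiments used Torch, so this is not the authors' code; the class name, the dropout value of 0.3, and the default sizes are assumptions, and the final layer is left generic so it can act as a classification head or a linear embedding layer depending on the training loss.

```python
import torch
import torch.nn as nn

class RNNEmbedder(nn.Module):
    """Stacked LSTM whose final hidden state feeds a stack of fully connected layers."""
    def __init__(self, feat_dim=39, hidden_dim=512, num_stacked=3,
                 fc_dim=1024, num_fc=3, out_dim=1061, dropout=0.3):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, num_layers=num_stacked,
                           batch_first=True, dropout=dropout)
        layers, in_dim = [], hidden_dim
        for _ in range(num_fc - 1):
            layers += [nn.Linear(in_dim, fc_dim), nn.ReLU(), nn.Dropout(dropout)]
            in_dim = fc_dim
        layers.append(nn.Linear(in_dim, out_dim))  # loss-specific head (log-softmax or linear)
        self.fc = nn.Sequential(*layers)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        _, (h_n, _) = self.rnn(x)          # h_n: (num_stacked, batch, hidden_dim)
        return self.fc(h_n[-1])            # embed from the top layer's last hidden state

embeddings = RNNEmbedder()(torch.randn(4, 100, 39))   # -> shape (4, 1061)
```

Note that, as described above, no non-linearity or dropout is applied between the last recurrent hidden state and the first fully connected layer in this sketch.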
Training
We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.
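A single optimization step for this classifier-based training might look like the sketch below (PyTorch, with F.cross_entropy standing in for the log-softmax plus cross entropy combination; the function and argument names are hypothetical).

```python
import torch.nn.functional as F

def classifier_step(model, optimizer, batch_feats, batch_labels):
    """One SGD step of word-classification training with a cross entropy loss."""
    optimizer.zero_grad()
    logits = model(batch_feats)                   # (batch, num_word_labels)
    loss = F.cross_entropy(logits, batch_labels)  # log-softmax + negative log-likelihood
    loss.backward()
    optimizer.step()
    return loss.item()
```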
The second training approach, based on earlier work of Kamper et al. BIBREF13, is to train "Siamese" networks BIBREF30. In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before—an RNN followed by a set of fully connected layers—but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an “anchor”, $x_a$, the second is another segment with the same word label, $x_s$, and the third is a segment corresponding to a different word label, $x_d$. Then, the network is trained using a “cos-hinge” loss:
$$\ell(x_a, x_s, x_d) = \max\big\{0,\; m + d_{\cos}\big(g(x_a), g(x_s)\big) - d_{\cos}\big(g(x_a), g(x_d)\big)\big\}$$
where $d_{\cos}(\cdot,\cdot)$ is the cosine distance between two embeddings and $m$ is the margin. Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.
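On batches of embedded triplets, the cos-hinge loss can be written compactly; the PyTorch sketch below is an illustrative rendering rather than the authors' Torch implementation, with the margin of 0.4 taken from the training details reported later.

```python
import torch
import torch.nn.functional as F

def cos_hinge_loss(emb_a, emb_s, emb_d, margin=0.4):
    """max{0, m + d_cos(anchor, same-word) - d_cos(anchor, different-word)}, averaged."""
    d_as = 1.0 - F.cosine_similarity(emb_a, emb_s, dim=-1)
    d_ad = 1.0 - F.cosine_similarity(emb_a, emb_d, dim=-1)
    return torch.clamp(margin + d_as - d_ad, min=0.0).mean()
```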
Experiments
Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship, and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20, which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine distance between their acoustic word embeddings and declaring them to be the same word if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.
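One way to compute this AP is sketched below with NumPy, SciPy, and scikit-learn; it is an unoptimized illustration rather than the official evaluation script, and it scores every segment pair by its negative cosine distance before handing the same/different labels to a standard average-precision routine.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.metrics import average_precision_score

def word_discrimination_ap(embeddings, labels):
    """AP for same/different-word decisions over all segment pairs,
    scored by negative cosine distance between acoustic word embeddings."""
    dists = pdist(np.asarray(embeddings), metric="cosine")   # condensed pairwise distances
    n = len(labels)
    same = np.array([labels[i] == labels[j]
                     for i in range(n) for j in range(i + 1, n)], dtype=int)
    return average_precision_score(same, -dists)             # higher score = more similar
```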
The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31. The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input vectors $x_t$ to the word embedding models) are 39-dimensional MFCCs+$\Delta$+$\Delta\Delta$s. We use the same train, development, and test partitions as in prior work BIBREF13, BIBREF11, and the same acoustic features as in BIBREF13, for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13, when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.
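A rough librosa-based front end of this kind is sketched below; the exact analysis settings (window, hop, normalization) used in the prior work may differ, so the parameters here should be read as assumptions.

```python
import numpy as np
import librosa

y, sr = librosa.load("segment.wav", sr=8000)           # Switchboard is 8 kHz telephone speech
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # 13 static coefficients
feats = np.vstack([mfcc,
                   librosa.feature.delta(mfcc),              # first-order deltas
                   librosa.feature.delta(mfcc, order=2)])    # second-order deltas
x = feats.T                                             # (num_frames, 39) frame vectors
```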
When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the cos-hinge loss.
Classification network details
Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.
The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33. The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: if 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the best dev set AP is chosen. Several other optimizers—Adagrad BIBREF34, Adadelta BIBREF35, and Adam BIBREF36—were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.
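The plateau rule is simple enough to state directly in code; the sketch below is one possible reading of it, with the bookkeeping and the function name chosen for illustration. When it returns True, the caller would divide the learning rate by 10 and reset the plateau counter.

```python
def should_reduce_lr(batch_losses, history, plateau_count):
    """Flag a plateau when 99% of this epoch's average batch loss exceeds the
    mean of the previous 3 epochs' averages; 3 consecutive plateaus -> reduce."""
    avg = sum(batch_losses) / len(batch_losses)
    if len(history) >= 3 and 0.99 * avg > sum(history[-3:]) / 3.0:
        plateau_count += 1
    else:
        plateau_count = 0
    history.append(avg)
    if plateau_count >= 3:             # 3 consecutive plateau epochs
        return True, history, 0        # caller divides the learning rate by 10
    return False, history, plateau_count
```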
Siamese network details
For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.
In training the Siamese networks, each training mini-batch consists of $2B$ triplets. The first $B$ triplets are of the form $(x_a, x_s, x_d)$, where $x_a$ and $x_s$ are examples of the same class (a pair from the 100k same-word pair set) and $x_d$ is a randomly sampled example from a different class. Then, for each of these $B$ triplets $(x_a, x_s, x_d)$, an additional triplet $(x_s, x_a, x_d)$ is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13, which we found to improve stability in training and performance on the development set.
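A sketch of this mini-batch construction is shown below; same_word_pairs and examples_by_label are hypothetical data structures (a list of (segment, segment, label) tuples and a label-to-segments dictionary), and this version uses uniform negative sampling, with the non-uniform variant sketched after the next paragraph.

```python
import random

def build_triplet_batch(same_word_pairs, examples_by_label, batch_pairs):
    """Sample same-word pairs, attach a random different-word example to each,
    and add the anchor-swapped triplet so every segment serves as an anchor."""
    triplets, labels = [], list(examples_by_label)
    for x_a, x_s, label in random.sample(same_word_pairs, batch_pairs):
        other = random.choice([l for l in labels if l != label])
        x_d = random.choice(examples_by_label[other])
        triplets.append((x_a, x_s, x_d))
        triplets.append((x_s, x_a, x_d))   # swapped anchor
    return triplets
```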
In preliminary experiments, we compared two methods for choosing the negative examples $x_d$ during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample $x_d$ uniformly at random from the full set of training examples with labels different from that of $x_a$. This sampling method requires only word-pair supervision. In the case of non-uniform sampling, $x_d$ is sampled in two steps. First, we construct a distribution $P(w)$ over word labels $w$ and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF $P(w)$, we maintain an $N \times N$ matrix $S$, where $N$ is the number of unique word labels in training. Each word label corresponds to an integer $i \in [1, N]$ and therefore a row in $S$. The values in a row of $S$ are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.
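The bookkeeping this implies is sketched below in NumPy: a square score matrix with a zeroed diagonal whose rows are normalized into PMFs for two-step label sampling. The add_score update is a simplified stand-in for the paper's heuristic update rule, so it should be treated as an assumption.

```python
import numpy as np

class NegativeSampler:
    """Two-step non-uniform sampling of a different-word label, driven by an
    N x N score matrix S whose rows are normalized into label PMFs."""
    def __init__(self, num_labels, seed=0):
        self.rng = np.random.default_rng(seed)
        self.num_labels = num_labels
        self.reset()

    def reset(self):
        # Start of each epoch: 0's on the diagonal, 1's elsewhere (uniform sampling).
        self.S = np.ones((self.num_labels, self.num_labels)) - np.eye(self.num_labels)

    def sample_label(self, anchor_label):
        pmf = self.S[anchor_label] / self.S[anchor_label].sum()
        return int(self.rng.choice(self.num_labels, p=pmf))

    def add_score(self, label_i, label_j, score):
        # Simplified stand-in for the paper's update of the similarity scores.
        self.S[label_i, label_j] += score
        self.S[label_j, label_i] += score
```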
At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :
INLINEFORM0
The PMFs are updated after the forward pass of an entire mini-batch. The constant used in this update enforces a potentially stronger constraint than the margin in the cos-hinge loss, in order to promote diverse sampling; we use a single fixed value for this constant in all experiments. This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.
We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .
Results
Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.
We next analyze the effects of model design choices, as well as the learned embeddings themselves.
Effect of model structure
Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.
Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10, we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding the number of fully connected layers fixed at one. There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.
After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.
Effect of embedding dimensionality
For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .
Effect of training vocabulary
We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least $k$ times in the classifier training set, then it occurs in at least $k(k-1)/2$ same-word pairs in the Siamese training data.
Visualization of embeddings
In order to gain a better qualitative understanding of the differences between classifier-based and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12. For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.
Conclusion
Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.
These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks. | a vector of frame-level acoustic features |
1d791713d1aa77358f11501f05c108045f53c8aa | 1d791713d1aa77358f11501f05c108045f53c8aa_0 | Q: Which dimensionality do they use for their embeddings?
Text: Introduction
Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.
Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.
An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.
There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision.
Related work
We next briefly describe the most closely related prior work.
Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.
Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.
Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.
The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.
Approach
An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 .
The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to "remember" all of the needed intermediate information. Some of that information may not be needed in the final embedding. In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.
Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .
In an LSTM RNN, at each time frame both the hidden state INLINEFORM0 and an associated “cell memory" vector INLINEFORM1 , are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:
INLINEFORM0
where INLINEFORM0 , and INLINEFORM1 are all vectors of the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate sizes, INLINEFORM4 and INLINEFORM5 are learned bias vectors, INLINEFORM6 is a componentwise logistic activation, and INLINEFORM7 refers to the Hadamard (componentwise) product.
Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate INLINEFORM0 and an update gate INLINEFORM1 as described below for a single-layer network: INLINEFORM2
where INLINEFORM0 , and INLINEFORM1 are all the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate size, and INLINEFORM4 , INLINEFORM5 and INLINEFORM6 are learned bias vectors.
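As a concrete companion to this description, a single GRU time step can be sketched in a few lines of NumPy; the formulation below is the standard one from the cited GRU work, with illustrative parameter dictionaries rather than the paper's actual weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU time step: reset gate r, update gate z, candidate state h_tilde;
    only the hidden state is produced and passed on."""
    r = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])
    z = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])
    h_tilde = np.tanh(W["h"] @ x_t + U["h"] @ (r * h_prev) + b["h"])
    return z * h_prev + (1.0 - z) * h_tilde
```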
All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors INLINEFORM0 and so on for layer INLINEFORM1 . For all but the first layer, the input INLINEFORM2 is replaced by the hidden state vector from the previous layer INLINEFORM3 .
For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.
Training
We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.
The second training approach, based on earlier work of Kamper et al. BIBREF13 , is to train "Siamese" networks BIBREF30 . In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before—an RNN followed by a set of fully connected layers—but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an “anchor", INLINEFORM0 , the second is another segment with the same word label, INLINEFORM1 , and the third is a segment corresponding to a different word label, INLINEFORM2 . Then, the network is trained using a “cos-hinge" loss:
DISPLAYFORM0
where INLINEFORM0 is the cosine distance between INLINEFORM1 . Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.
Experiments
Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship, and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20, which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine distance between their acoustic word embeddings and declaring them to be the same word if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.
The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.
When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the INLINEFORM0 loss.
Classification network details
Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.
The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33. The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: if 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the best dev set AP is chosen. Several other optimizers—Adagrad BIBREF34, Adadelta BIBREF35, and Adam BIBREF36—were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.
Siamese network details
For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.
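In code, this warm start amounts to swapping one layer. The sketch below assumes a classifier whose fully connected stack is stored as an nn.Sequential attribute named fc with a 1024-dimensional input to its final layer; both of these are assumptions for illustration rather than details from the paper.

```python
import torch.nn as nn

def warm_start_siamese(classifier_model, embed_dim=1024, fc_hidden=1024):
    """Reuse the tuned classifier, but replace its final (classification) layer
    with a linear layer of the desired embedding dimensionality."""
    classifier_model.fc[-1] = nn.Linear(fc_hidden, embed_dim)
    return classifier_model
```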
In training the Siamese networks, each training mini-batch consists of INLINEFORM0 triplets. INLINEFORM1 triplets are of the form INLINEFORM2 where INLINEFORM3 and INLINEFORM4 are examples of the same class (a pair from the 100k same-word pair set) and INLINEFORM5 is a randomly sampled example from a different class. Then, for each of these INLINEFORM6 triplets INLINEFORM7 , an additional triplet INLINEFORM8 is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13 , which we found to improve stability in training and performance on the development set.
In preliminary experiments, we compared two methods for choosing the negative examples INLINEFORM0 during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample INLINEFORM1 uniformly at random from the full set of training examples with labels different from INLINEFORM2 . This sampling method requires only word-pair supervision. In the case of non-uniform sampling, INLINEFORM3 is sampled in two steps. First, we construct a distribution INLINEFORM4 over word labels INLINEFORM5 and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF INLINEFORM6 , we maintain an INLINEFORM7 matrix INLINEFORM8 , where INLINEFORM9 is the number of unique word labels in training. Each word label corresponds to an integer INLINEFORM10 INLINEFORM11 [1, INLINEFORM12 ] and therefore a row in INLINEFORM13 . The values in a row of INLINEFORM14 are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.
At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :
INLINEFORM0
The PMFs INLINEFORM0 are updated after the forward pass of an entire mini-batch. The constant INLINEFORM1 enforces a potentially stronger constraint than is used in the INLINEFORM2 loss, in order to promote diverse sampling. In all experiments, we set INLINEFORM3 . This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.
We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .
Results
Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.
We next analyze the effects of model design choices, as well as the learned embeddings themselves.
Effect of model structure
Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.
Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10 , we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding fixed the number of fully-connected layers at INLINEFORM0 . There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.
After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.
Effect of embedding dimensionality
For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .
Effect of training vocabulary
We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least INLINEFORM0 times in the classifier training set, then it occurs at least INLINEFORM1 times in the Siamese paired training data.
Visualization of embeddings
In order to gain a better qualitative understanding of the differences between classifier-based and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12. For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.
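A plot of this kind can be produced with off-the-shelf tools; the sketch below (scikit-learn t-SNE plus matplotlib, with hypothetical inputs) is one way to generate it, though the authors' exact plotting settings are not specified.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(embeddings, words, out_path="awe_tsne.png"):
    """Project acoustic word embeddings to 2-D with t-SNE and label each point."""
    coords = TSNE(n_components=2, random_state=0).fit_transform(np.asarray(embeddings))
    plt.figure(figsize=(8, 8))
    plt.scatter(coords[:, 0], coords[:, 1], s=8)
    for (px, py), w in zip(coords, words):
        plt.annotate(w, (px, py), fontsize=7)
    plt.savefig(out_path)
```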
Conclusion
Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.
These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks. | 1061 |
6b6360fab2edc836901195c0aba973eae4891975 | 6b6360fab2edc836901195c0aba973eae4891975_0 | Q: Which dataset do they use?
Text: Introduction
Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.
Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.
An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.
There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision.
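Framed this way, the core operation in such a word discrimination setting is a pairwise comparison of embedding vectors; the SciPy sketch below (illustrative only) computes all pairwise cosine distances for a set of embedded segments and thresholds them into same/different-word decisions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def same_word_decisions(embeddings, threshold):
    """All pairwise cosine distances between fixed-dimensional embeddings;
    pairs whose distance falls below the threshold are declared the same word."""
    E = np.asarray(embeddings)
    return cdist(E, E, metric="cosine") < threshold
```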
Related work
We next briefly describe the most closely related prior work.
Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.
Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.
Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.
The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.
Approach
An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 .
The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to "remember" all of the needed intermediate information. Some of that information may not be needed in the final embedding. In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.
Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .
In an LSTM RNN, at each time frame both the hidden state $h_t$ and an associated "cell memory" vector $c_t$ are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:
$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$
$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$
$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$
$\tilde{c}_t = \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$
$c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t$
$h_t = o_t \circ \tanh(c_t)$
where $x_t$ is the acoustic feature vector input at frame $t$; $i_t$, $f_t$, $o_t$, $\tilde{c}_t$, $c_t$, and $h_t$ are all vectors of the same dimensionality; the matrices $W_{x\cdot}$ and $W_{h\cdot}$ are learned weight matrices of the appropriate sizes; the $b_\cdot$ terms are learned bias vectors; $\sigma$ is a componentwise logistic activation; and $\circ$ refers to the Hadamard (componentwise) product.
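To make the update concrete, the following is a minimal NumPy sketch of a single LSTM time step corresponding to the equations above; the stacked weight layout and the name lstm_step are illustrative assumptions, not the authors' Torch implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_x, W_h, b):
    # gates stacked along the first axis in the order [i, f, o, c_tilde]
    d = h_prev.shape[0]
    z = W_x @ x_t + W_h @ h_prev + b
    i_t = sigmoid(z[:d])                   # input gate
    f_t = sigmoid(z[d:2 * d])              # forget gate
    o_t = sigmoid(z[2 * d:3 * d])          # output gate
    c_tilde = np.tanh(z[3 * d:])           # candidate cell memory
    c_t = f_t * c_prev + i_t * c_tilde     # updated cell memory
    h_t = o_t * np.tanh(c_t)               # updated hidden state
    return h_t, c_t

# toy usage: a 39-dimensional acoustic frame and a 512-dimensional hidden state
rng = np.random.default_rng(0)
d_in, d = 39, 512
W_x = 0.01 * rng.standard_normal((4 * d, d_in))
W_h = 0.01 * rng.standard_normal((4 * d, d))
h_t, c_t = lstm_step(rng.standard_normal(d_in), np.zeros(d), np.zeros(d), W_x, W_h, np.zeros(4 * d))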
Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate $r_t$ and an update gate $u_t$ as described below for a single-layer network:
$r_t = \sigma(W_{xr} x_t + W_{hr} h_{t-1} + b_r)$
$u_t = \sigma(W_{xu} x_t + W_{hu} h_{t-1} + b_u)$
$\tilde{h}_t = \tanh(W_{xh} x_t + W_{hh} (r_t \circ h_{t-1}) + b_h)$
$h_t = u_t \circ h_{t-1} + (1 - u_t) \circ \tilde{h}_t$
where $r_t$, $u_t$, $\tilde{h}_t$, and $h_t$ are all of the same dimensionality, the matrices $W_{x\cdot}$ and $W_{h\cdot}$ are learned weight matrices of the appropriate size, and $b_r$, $b_u$, and $b_h$ are learned bias vectors.
All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors $h^{(l)}_t$, $c^{(l)}_t$, and so on for layer $l$. For all but the first layer, the input $x_t$ is replaced by the hidden state vector from the previous layer, $h^{(l-1)}_t$.
For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.
Training
We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.
The second training approach, based on earlier work of Kamper et al. BIBREF13 , is to train "Siamese" networks BIBREF30 . In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before (an RNN followed by a set of fully connected layers), but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an "anchor", $x_a$, the second is another segment with the same word label, $x_s$, and the third is a segment corresponding to a different word label, $x_d$. Then, the network is trained using a "cos-hinge" loss:
$\ell(x_a, x_s, x_d) = \max\{0,\; m + d_{\cos}(x_a, x_s) - d_{\cos}(x_a, x_d)\}$
where $d_{\cos}(\cdot, \cdot)$ is the cosine distance between the embeddings of two segments and $m$ is a margin hyperparameter. Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.
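As a sketch of this objective in isolation (the helper names are assumptions; the margin value of 0.4 is the one reported in the experiments below):

import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def cos_hinge(x_a, x_s, x_d, margin=0.4):
    # zero loss once the different-word pair is at least `margin` farther apart than the same-word pair
    return max(0.0, margin + cosine_distance(x_a, x_s) - cosine_distance(x_a, x_d))

# toy embeddings
rng = np.random.default_rng(2)
x_a, x_s, x_d = rng.standard_normal((3, 1024))
print(cos_hinge(x_a, x_s, x_d))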
EXPERIMENTS
Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship, and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20 , which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine distance between their acoustic word embeddings and declaring them to be the same word if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.
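A minimal sketch of this evaluation procedure, assuming the embeddings have already been computed (the function name and the toy data are illustrative, not the original evaluation code):

import numpy as np

def average_precision(embeddings, labels):
    # cosine similarity for every pair of evaluation segments
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(labels)
    iu = np.triu_indices(n, k=1)                       # each unordered pair once
    scores = sims[iu]
    same = np.asarray(labels)[iu[0]] == np.asarray(labels)[iu[1]]
    # ranking all pairs by similarity is equivalent to sweeping the threshold;
    # AP is the mean of the precision values at each true (same-word) pair
    order = np.argsort(-scores)
    same = same[order]
    precision = np.cumsum(same) / np.arange(1, len(same) + 1)
    return precision[same].mean()

# toy usage
rng = np.random.default_rng(0)
emb = rng.standard_normal((50, 16))
lab = rng.integers(0, 10, size=50)
print(average_precision(emb, lab))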
The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.
When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the INLINEFORM0 loss.
Classification network details
Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.
The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33 . The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: if 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the best dev set AP is chosen. Several other optimizers (Adagrad BIBREF34 , Adadelta BIBREF35 , and Adam BIBREF36 ) were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.
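The plateau heuristic can be paraphrased in code roughly as follows (a sketch only; the variable names and the surrounding training loop are assumptions rather than the original implementation):

def should_reduce_lr(epoch_losses, plateau_count, window=3, factor=0.99, patience=3):
    """epoch_losses: average batch loss per completed epoch, most recent last."""
    if len(epoch_losses) <= window:
        return False, plateau_count
    running_avg = sum(epoch_losses[-window - 1:-1]) / window
    # plateau if 99% of the current epoch's loss still exceeds the running average
    if factor * epoch_losses[-1] > running_avg:
        plateau_count += 1
    else:
        plateau_count = 0
    if plateau_count >= patience:
        return True, 0
    return False, plateau_count

# usage inside a training loop (illustrative):
lr, plateaus, losses = 0.1, 0, []
for epoch_loss in [1.0, 0.8, 0.7, 0.69, 0.685, 0.684]:
    losses.append(epoch_loss)
    reduce_now, plateaus = should_reduce_lr(losses, plateaus)
    if reduce_now:
        lr /= 10.0
print(lr)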
Siamese network details
For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.
In training the Siamese networks, each training mini-batch consists of $2B$ triplets. $B$ triplets are of the form $(x_a, x_s, x_d)$, where $x_a$ and $x_s$ are examples of the same class (a pair from the 100k same-word pair set) and $x_d$ is a randomly sampled example from a different class. Then, for each of these $B$ triplets $(x_a, x_s, x_d)$, an additional triplet $(x_s, x_a, x_d)$ is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13 , which we found to improve stability in training and performance on the development set.
In preliminary experiments, we compared two methods for choosing the negative examples $x_d$ during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample $x_d$ uniformly at random from the full set of training examples with labels different from that of $x_a$. This sampling method requires only word-pair supervision. In the case of non-uniform sampling, $x_d$ is sampled in two steps. First, we construct a distribution over word labels and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF, we maintain an $N \times N$ matrix $S$, where $N$ is the number of unique word labels in training. Each word label corresponds to an integer $i \in [1, N]$ and therefore a row in $S$. The values in a row of $S$ are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.
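A sketch of the two-step sampling itself (the update rule for $S$ is omitted here; the function name and toy data are illustrative assumptions):

import numpy as np

def sample_negative(S, anchor_label, examples_by_label, rng):
    # row of S for the anchor's label, normalized to a PMF over word labels
    row = S[anchor_label].astype(float).copy()
    row[anchor_label] = 0.0                  # never sample the anchor's own label
    pmf = row / row.sum()
    neg_label = rng.choice(len(row), p=pmf)  # step 1: pick a different word label
    return rng.choice(examples_by_label[neg_label])  # step 2: uniform within that label

# toy usage with 4 word labels
rng = np.random.default_rng(3)
N = 4
S = np.ones((N, N)) - np.eye(N)              # start of epoch: reduces to uniform sampling
examples_by_label = {0: ["seg_a"], 1: ["seg_b", "seg_c"], 2: ["seg_d"], 3: ["seg_e"]}
print(sample_negative(S, anchor_label=1, examples_by_label=examples_by_label, rng=rng))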
At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :
INLINEFORM0
The PMFs INLINEFORM0 are updated after the forward pass of an entire mini-batch. The constant INLINEFORM1 enforces a potentially stronger constraint than is used in the INLINEFORM2 loss, in order to promote diverse sampling. In all experiments, we set INLINEFORM3 . This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.
We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .
Results
Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.
We next analyze the effects of model design choices, as well as the learned embeddings themselves.
Effect of model structure
Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.
Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10 , we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding fixed the number of fully-connected layers at INLINEFORM0 . There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.
After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.
Effect of embedding dimensionality
For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .
Effect of training vocabulary
We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least INLINEFORM0 times in the classifier training set, then it occurs at least INLINEFORM1 times in the Siamese paired training data.
Visualization of embeddings
In order to gain a better qualitative understanding of the differences between classifier and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12 . For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.
Conclusion
Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.
These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks. | Switchboard conversational English corpus |
b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8 | b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8_0 | Q: By how much do they outperform previous results on the word discrimination task?
Text: Introduction
Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.
Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.
An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.
There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision.
Related work
We next briefly describe the most closely related prior work.
Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.
Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.
Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.
The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.
Approach
An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 .
The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to "remember" all of the needed intermediate information. Some of that information may not be needed in the final embedding. In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.
Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .
In an LSTM RNN, at each time frame both the hidden state $h_t$ and an associated "cell memory" vector $c_t$ are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:
$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$
$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$
$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$
$\tilde{c}_t = \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$
$c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t$
$h_t = o_t \circ \tanh(c_t)$
where $x_t$ is the acoustic feature vector input at frame $t$; $i_t$, $f_t$, $o_t$, $\tilde{c}_t$, $c_t$, and $h_t$ are all vectors of the same dimensionality; the matrices $W_{x\cdot}$ and $W_{h\cdot}$ are learned weight matrices of the appropriate sizes; the $b_\cdot$ terms are learned bias vectors; $\sigma$ is a componentwise logistic activation; and $\circ$ refers to the Hadamard (componentwise) product.
Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate $r_t$ and an update gate $u_t$ as described below for a single-layer network:
$r_t = \sigma(W_{xr} x_t + W_{hr} h_{t-1} + b_r)$
$u_t = \sigma(W_{xu} x_t + W_{hu} h_{t-1} + b_u)$
$\tilde{h}_t = \tanh(W_{xh} x_t + W_{hh} (r_t \circ h_{t-1}) + b_h)$
$h_t = u_t \circ h_{t-1} + (1 - u_t) \circ \tilde{h}_t$
where $r_t$, $u_t$, $\tilde{h}_t$, and $h_t$ are all of the same dimensionality, the matrices $W_{x\cdot}$ and $W_{h\cdot}$ are learned weight matrices of the appropriate size, and $b_r$, $b_u$, and $b_h$ are learned bias vectors.
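For concreteness, a minimal NumPy sketch of a single GRU time step matching the equations above (the per-gate parameter layout and the name gru_step are illustrative assumptions, not the authors' Torch implementation):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, params):
    Wr, Ur, br, Wu, Uu, bu, Wh, Uh, bh = params
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev + br)               # reset gate
    u_t = sigmoid(Wu @ x_t + Uu @ h_prev + bu)               # update gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev) + bh)   # candidate hidden state
    return u_t * h_prev + (1.0 - u_t) * h_tilde              # new hidden state

# toy usage: a 39-dimensional acoustic frame and a 512-dimensional hidden state
rng = np.random.default_rng(1)
d_in, d = 39, 512
mat = lambda rows, cols: 0.01 * rng.standard_normal((rows, cols))
params = (mat(d, d_in), mat(d, d), np.zeros(d),
          mat(d, d_in), mat(d, d), np.zeros(d),
          mat(d, d_in), mat(d, d), np.zeros(d))
h_t = gru_step(rng.standard_normal(d_in), np.zeros(d), params)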
All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors $h^{(l)}_t$, $c^{(l)}_t$, and so on for layer $l$. For all but the first layer, the input $x_t$ is replaced by the hidden state vector from the previous layer, $h^{(l-1)}_t$.
For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.
Training
We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.
The second training approach, based on earlier work of Kamper et al. BIBREF13 , is to train "Siamese" networks BIBREF30 . In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before (an RNN followed by a set of fully connected layers), but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an "anchor", $x_a$, the second is another segment with the same word label, $x_s$, and the third is a segment corresponding to a different word label, $x_d$. Then, the network is trained using a "cos-hinge" loss:
$\ell(x_a, x_s, x_d) = \max\{0,\; m + d_{\cos}(x_a, x_s) - d_{\cos}(x_a, x_d)\}$
where $d_{\cos}(\cdot, \cdot)$ is the cosine distance between the embeddings of two segments and $m$ is a margin hyperparameter. Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.
EXPERIMENTS
Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship, and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20 , which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine distance between their acoustic word embeddings and declaring them to be the same word if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.
The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.
When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the INLINEFORM0 loss.
Classification network details
Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.
The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33 . The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: if 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the best dev set AP is chosen. Several other optimizers (Adagrad BIBREF34 , Adadelta BIBREF35 , and Adam BIBREF36 ) were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.
Siamese network details
For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.
In training the Siamese networks, each training mini-batch consists of $2B$ triplets. $B$ triplets are of the form $(x_a, x_s, x_d)$, where $x_a$ and $x_s$ are examples of the same class (a pair from the 100k same-word pair set) and $x_d$ is a randomly sampled example from a different class. Then, for each of these $B$ triplets $(x_a, x_s, x_d)$, an additional triplet $(x_s, x_a, x_d)$ is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13 , which we found to improve stability in training and performance on the development set.
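A sketch of this mini-batch construction (the helper name, the use of Python's random module, and the toy data are assumptions for illustration only):

import random

def build_triplet_batch(same_pairs, label_of, all_segments, batch_pairs, rng):
    """same_pairs: list of (x_a, x_s) segments sharing a word label."""
    batch = []
    for x_a, x_s in rng.sample(same_pairs, batch_pairs):
        # sample a negative with a different word label
        x_d = rng.choice(all_segments)
        while label_of[x_d] == label_of[x_a]:
            x_d = rng.choice(all_segments)
        batch.append((x_a, x_s, x_d))
        batch.append((x_s, x_a, x_d))   # reversed triplet: every segment serves as anchor
    return batch

# toy usage
rng = random.Random(0)
label_of = {"dog1": "dog", "dog2": "dog", "cat1": "cat", "sun1": "sun"}
pairs = [("dog1", "dog2")]
print(build_triplet_batch(pairs, label_of, list(label_of), batch_pairs=1, rng=rng))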
In preliminary experiments, we compared two methods for choosing the negative examples $x_d$ during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample $x_d$ uniformly at random from the full set of training examples with labels different from that of $x_a$. This sampling method requires only word-pair supervision. In the case of non-uniform sampling, $x_d$ is sampled in two steps. First, we construct a distribution over word labels and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF, we maintain an $N \times N$ matrix $S$, where $N$ is the number of unique word labels in training. Each word label corresponds to an integer $i \in [1, N]$ and therefore a row in $S$. The values in a row of $S$ are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.
At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :
INLINEFORM0
The PMFs INLINEFORM0 are updated after the forward pass of an entire mini-batch. The constant INLINEFORM1 enforces a potentially stronger constraint than is used in the INLINEFORM2 loss, in order to promote diverse sampling. In all experiments, we set INLINEFORM3 . This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.
We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .
Results
Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.
We next analyze the effects of model design choices, as well as the learned embeddings themselves.
Effect of model structure
Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.
Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10 , we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding fixed the number of fully-connected layers at INLINEFORM0 . There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.
After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.
Effect of embedding dimensionality
For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .
Effect of training vocabulary
We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least INLINEFORM0 times in the classifier training set, then it occurs at least INLINEFORM1 times in the Siamese paired training data.
Visualization of embeddings
In order to gain a better qualitative understanding of the differences between classifier and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12 . For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.
Conclusion
Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.
These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks. | Their best average precision tops previous best result by 0.202 |
86a93a2d1c19cd0cd21ad1608f2a336240725700 | 86a93a2d1c19cd0cd21ad1608f2a336240725700_0 | Q: How does Frege's holistic and functional approach to meaning relate to the general distributional hypothesis?
Text: INTRODUCTION
“Meaning is, therefore, something that words have in sentences; and it's something that sentences have in a language.” BIBREF0 On the other hand, meaning could also be something that words have on their own, with sentences being compositions and language a collection of words. This is the question of semantic holism versus atomism, which was important in the philosophy of language in the second half of the 20th century and has not been satisfyingly answered yet.
Artificial neural networks are the state-of-the-art solution for many problems in natural language processing (and machine learning in general). They produce word representation with interesting properties, but the way they work is little understood from the perspective of linguistics or the philosophy of language.
We believe that by finding parallels between concepts in AI and the philosophy of language, we can better understand both areas.
In this paper, we present an analogy between meaning defined as truth-value potential (a reformulation of Fregean holistic and functional approach) and a variant of language representation model, therefore pointing out a possibility that its “striking syntactic and semantic properties” BIBREF1 are formed due to adhering to holistic principles.
INTRODUCTION ::: Related work
We have found only one work concerning the philosophical aspects of neural language models BIBREF2. It is, however, concentrating on Self-Organizing Maps and Quine's version of semantic holism.
There are papers showing that Skip-gram with negative sampling is implicitly a factorization of a word-context matrix (e.g. BIBREF3, although this result was later contested by various authors, such as BIBREF4 and BIBREF5), or deriving the equations in an alternative way BIBREF6 (discussed more in Section SECREF3). This may tell us something about the model, but it does not answer the principal question: why should the matrix factorized in a certain way contain semantic information?
SEMANTIC HOLISM AND ATOMISM
Semantic holism (or meaning holism) is “the thesis that what a linguistic expression means depends on its relations to many or all other expressions within the same totality. [...] The totality in question may be the language to which the expressions belong, or a theory formulation in that language.” BIBREF7 The opposing view is called semantic atomism, and it claims that there are expressions (typically words), whose meaning does not depend on the meaning of other expressions. The meaning of these expressions is given by something outside language (e.g. their relation to physical or mental objects).
In the following sections, we will specify the implications of both alternatives for semantics. The question also plays a role in cognitive science (content identity and similarity), epistemology (commensurability of theories) and seems to be strongly connected with the analytic/synthetic distinction BIBREF0. There are other positions in between these two, such as semantic molecularism or the belief that neither relations external nor internal are primary in forming meaning. However, to keep this text simple, we will only concentrate on extreme positions. We will also only talk about words, although the same argument can be used with smaller meaningful language units (e.g. parts of a compound word).
Our goal is not to assess whether the truth lies with holism, atomism or neither of them. We will only show that holism is a useful perspective where understanding neural language models is concerned.
Before we get into details of the two perspectives, let us point out two critical aspects of their difference: holism proclaims interdependence of meanings of words, contrary to their independence in atomism. And holism favours decomposition over composition.
SEMANTIC HOLISM AND ATOMISM ::: Atomism
“It is a widely held view that much of the history of the philosophy of language consists of a failed attempt to make semantic atomism work.” BIBREF0
Atomism played an important role in analytic philosophy, starting with Bertrand Russell's logical atomism and continuing with logical positivism, as exemplified in this quote by Carnap BIBREF8:
A language consists of a vocabulary and a syntax, i.e. a set of words which have meanings and rules of sentence formation. These rules indicate how sentences may be formed out of the various sorts of words.
For logical positivists, words have meaning, because they refer to objects (be it physical, sensual, logical, mathematical or other). The rules of composition determine the meaning of sentences (and rule out senseless sequences of words).
Under this (or similar) view, the fact that words refer to the outside world is presupposed. Their references are independent of each other (that “dog” refers to dog is independent of that “horse” refers to horse). There is strong emphasis on compositionality, that reached its peak in Chomskian linguistics and is still relevant today.
Crucially, this means that a word can have meaning on its own (e.g. by referring to something). The meaning of larger units, such as sentences, is derived by the rules of composition from the meaning of words.
SEMANTIC HOLISM AND ATOMISM ::: Holism
Semantic holism accents the interdependence of meaning. The whole (language, theory, ...) is the primary vehicle of meaning. The meaning of smaller units is derived by decomposition.
This view is motivated by the same word having a different meaning in a different context. Gottlob Frege has shown BIBREF9 that even such seemingly unambiguous words as numbers play distinct roles in different situations: “5 is a prime number” and “there are 5 cows on the meadow” are different at least in that the first “5” signifies a complete (abstract) object, while the second one needs to be supplemented with information that it is cattle of which there are 5 specimens, otherwise the expression would not be grammatical.
Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character.
Another group of arguments for holism consist of variations on the theme of impossibility of knowing or using a word without being able to use other words. For example, it could be argued that a person could not correctly use the word “mammal”, without also knowing (at least some of) “bird”, “animal” and kinds of animals. Therefore the meaning of words cannot be formed in isolation.
Something that is harder to explain under holism than under atomism is the fact that words refer to objects. If the meaning of words is given by other words, how is it connected to the world around us? However, not all words refer to something. And even if subscribing to holism makes explaining reference harder, it may be because it is a hard problem to explain.
Another thing that is simpler under atomism is compositionality. While in atomism it plays a central role as one of the presupposed properties of language, holism may not need it. But it does not claim that words do not have meaning at all, only that it is derived (by some sort of decomposition) from the meaning of the whole.
WORD REPRESENTATIONS IN AI
Although all artificial neural networks that work with language must have some way of representing it, the most interesting representations come from neural language models. Language modelling is a task of predicting a missing word from a sequence or generating text. There is also a similar class of models that are designed specifically to produce representations of language units, which we will call neural language representation models.
The representations (also called embeddings) are high dimensional vectors of real numbers. They are either learned together with the rest of the network for the particular task or pretrained by a general language representation model (typically on a larger dataset not specific for the task).
Some neural language (representation) models produce representations with semantic properties, although the task of language modeling itself is not (at least at first sight) directly connected with semantics and no explicit semantic annotation is given to the neural network.
These semantic properties became popular with the invention of the word2vec software and the Skip-gram model, whose author said about it BIBREF1:
The model itself has no knowledge of syntax or morphology or semantics. Remarkably, training such a purely lexical model to maximize likelihood will induce word representations with striking syntactic and semantic properties.
However, they did not present any explanation of the phenomenon.
Goldberg and Levy BIBREF6 present a detailed derivation of the central equation of the Skip-gram model. In the last section they say:
Why does this produce good word representations?
Good question. We don't really know.
The distributional hypothesis states that words in similar contexts have similar meanings. The objective [of the Skip-gram model] clearly tries to increase the [dot product of the context and the word representations] for good word-context pairs, and decrease it for bad ones. Intuitively, this means that words that share many contexts will be similar to each other (note also that contexts sharing many words will also be similar to each other). This is, however, very hand-wavy. Can we make this intuition more precise? We'd really like to see something more formal.
We believe that the implicit holistic component of this “hand-wavy” approach is central to the quality of Skip-gram representations and we can make the intuition more precise by analogy with the definition of the truth-value potential.
WORD REPRESENTATIONS IN AI ::: Semantic properties of the Skip-Gram model
The Skip-gram model was introduced by Tomáš Mikolov et al. BIBREF11 as a method to efficiently train word embeddings. It exceeded state-of-the-art in various semantic tasks. The embeddings have interesting semantic properties, most notably the vector arithmetic illustrated by Figure FIGREF4 and the following equation BIBREF1:
$v(\textit{king}) - v(\textit{man}) + v(\textit{woman}) \approx v(\textit{queen}),$
meaning that starting with the word "king", if we subtract the vector for the word "man" and add the vector for the word "woman", the nearest vector in the embedding space will be the one that corresponds to the word "queen". This means that queen is to woman as king is to man.
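To make the arithmetic concrete, a toy sketch of how such an analogy query is typically answered, using hand-picked two-dimensional vectors rather than real word2vec embeddings:

import numpy as np

def analogy(emb, a, b, c):
    """Return the word whose vector is nearest (by cosine) to emb[a] - emb[b] + emb[c]."""
    target = emb[a] - emb[b] + emb[c]
    target /= np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):            # exclude the query words themselves
            continue
        sim = np.dot(vec, target) / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# toy 2-dimensional "embeddings" chosen by hand so that king - man + woman is close to queen
emb = {"king": np.array([1.0, 1.0]), "man": np.array([1.0, 0.0]),
       "woman": np.array([1.0, 0.2]), "queen": np.array([1.0, 1.2]),
       "apple": np.array([-1.0, 0.0])}
print(analogy(emb, "king", "man", "woman"))   # prints "queen"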
Hollis et al. BIBREF12 show that it is possible to infer various psycholinguistic and semantic properties of words from the Skip-gram embeddings. Mikolov et al. BIBREF13 also trained the Skip-gram model with phrases, resulting in even simpler and more elegant equations, such as
Mikolov et al. BIBREF11 proposed another shallow neural language model, Continuous Bag of Words (CBOW). The main difference between CBOW and Skip-gram (see Figure FIGREF6) is that while Skip-gram predicts context words from a given word, CBOW predicts a word from a given context.
RELEVANT THEORIES OF MEANING
In this section, we discuss theories of meaning that are relevant to word representations in artificial neural networks. Notice that even though they strictly speaking do not require meaning holism, they all lean towards it quite strongly.
RELEVANT THEORIES OF MEANING ::: The distributional hypothesis
Holism is generally a better alternative in cases where there is nothing beside language itself to anchor meaning to. This is the case of neural language (representation) models. If they represent meaning at all, it must be derived from the training corpus. This may be the reason behind the popularity of the distributional hypothesis in neural language model literature. The famous saying by Firth BIBREF14, “You shall know a word by the company it keeps!”, is quoted in majority of papers concerned with vector space models of language.
The general distributional hypothesis states that the meaning of a word is given by the contexts in which it occurs. It is, however, worth noticing that in Firth's theory, collocation is just one among multiple levels of meaning and his text does not support the idea of meaning based on context alone.
A more suitable formulation of the distributional hypothesis (referenced in connection to Skip-gram in BIBREF15) is found in Distributional structure BIBREF16, where it is suggested that distribution may be used for comparing meanings and that “difference of meaning correlates with difference of distribution”.
Although this certainly describes a basic principle of neural language models, it is still rather vague.
RELEVANT THEORIES OF MEANING ::: The use theory of meaning
The use theory of meaning can be summed up as “the meaning of a word is its use in the language” BIBREF17. It is associated with the late Wittgenstein's concept of a language game. In Philosophical Investigations BIBREF17, he writes:
To say “This combination of words makes no sense” excludes it from the sphere of language and thereby bounds the domain of language. [...] When a sentence is called senseless, it is not as it were its sense that is senseless. But a combination of words is being excluded from the language, withdrawn from circulation.
This “bounding of the domain of language” is precisely what a language model does; the use theory may therefore be one way to connect language modelling and semantics.
That “knowledge of language emerges from language use” is also one of the main hypotheses of cognitive linguistics BIBREF18.
RELEVANT THEORIES OF MEANING ::: Structuralism
In structuralism BIBREF19, the meaning of a word is given by its relation to the other words of the language:
The elements of a structure have neither extrinsic designation, nor intrinsic signification. Then what is left? [...] [N]othing other than a sense [...]: a sense which is necessarily and uniquely “positional.” BIBREF20
This holds for word representations in artificial neural networks as well. The vectors representing the words do not have any other meaning than their position among the rest of the vectors and a single vector does not have any significance outside the model. This is also demonstrated by the vectors being different every time the model is trained because of random initialization.
SKIP-GRAM AND TRUTH-VALUE POTENTIAL
In this section, we introduce the truth-value potential and show that Skip-gram corresponds to it better than CBOW.
SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: The truth-value potential
Tugendhat's compact reformulation of Frege's sentence holism, the definition of meaning as truth-value potential is BIBREF21:
[T]wo expressions $\phi $ and $\psi $ have the same truth-value potential if and only if, whenever each is completed by the same expression to form a sentence, the two sentences have the same truth-value.
We can also express this definition in the following form:

$M(\phi ) = M(\psi ) \Leftrightarrow \forall x\colon T(x(\phi )) = T(x(\psi )),$

where $M$ is the truth-value potential (meaning), $T$ is the truth-value of the sentence and $x(\omega )$ is the result of completing the expression $\omega $ by the expression $x$ to form a sentence.
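As a toy computational reading of this definition, the sketch below checks whether two expressions receive the same truth-value under every completion. The sentence frames and the truth function are invented for the example and stand in for the (in principle unbounded) totality of the language.

```python
# Invented sentence frames playing the role of the completing expressions x(.).
FRAMES = [
    "{} is a mammal",
    "{} can fly",
    "{} is an animal",
]

# A made-up truth function standing in for T; in the definition it ranges
# over the whole language, here only over the toy frames above.
TRUE_SENTENCES = {
    "a dog is a mammal", "a hound is a mammal",
    "a dog is an animal", "a hound is an animal", "a sparrow is an animal",
    "a sparrow can fly",
}

def truth(sentence):
    return sentence in TRUE_SENTENCES

def same_truth_value_potential(phi, psi):
    return all(truth(f.format(phi)) == truth(f.format(psi)) for f in FRAMES)

print(same_truth_value_potential("a dog", "a hound"))     # True: same potential
print(same_truth_value_potential("a dog", "a sparrow"))   # False
```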
One important aspect of this definition is that, following Frege BIBREF10, it is based on an assumption that the sentence (or rather the corresponding judgement) is the basic unit of meaning.
SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: Word2vec models and semantic holism
The definition of meaning as truth-value potential is analogous to the process of training a model for word representations. One difference is that when we are training a model, we do not have the whole of language at our disposal. Even after approximating the language with a finite corpus, it is still not practical to compare all the contexts for a given word at the same time, therefore the universal quantifier has to be replaced by an iterative process of examining the contexts one by one (or actually batch by batch, which is a step back towards the totality that is being estimated). And we have no means to assess whether the sentences from the corpus are true or false. We can either assume that they are mostly true, or try to replace the concept of truth with something else (maybe language use). Even the first option seems to be enough—imagine a corpus full of false sentences about cats, e.g. “Cats can fly.”, “Cats are cetaceans.” etc. We cannot expect the representation of the word “cats” in a model trained on this corpus to be any good, therefore the requirement for the corpus to consist mostly of true sentences is not excessive.
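The corpus-based approximation described above can be sketched directly: instead of quantifying over all completions at once, we iterate through a finite corpus and compare two words by the contexts in which they actually occur. The five-sentence corpus and the window size are assumptions made only for illustration.

```python
from collections import Counter

# Tiny invented corpus standing in for the training data.
corpus = [
    "cats chase mice in the garden",
    "dogs chase mice in the garden",
    "cats sleep on the sofa",
    "dogs sleep on the sofa",
    "stones lie on the ground",
]

def context_counts(word, window=2):
    counts = Counter()
    for sentence in corpus:                      # examine the contexts one by one
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def context_overlap(a, b):
    ca, cb = context_counts(a), context_counts(b)
    shared = sum((ca & cb).values())
    total = sum((ca | cb).values())
    return shared / total if total else 0.0

print(context_overlap("cats", "dogs"))    # high: similar contexts
print(context_overlap("cats", "stones"))  # low
```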
The simplest model that corresponds to this analogy is the Skip-gram model. It does just what is described in the definition – it fixes a word and goes through all the possible contexts. It compares the words based on the context. The context words are predicted and their representations are fixed (in a single training step), while the representation of a single word is learned. By learning the representation of a word from the representation of the context, Skip-gram complies with the principles of semantic holism. The analogy between the definition of truth-value potential and the process of training the Skip-gram model is one possible explanation for its semantic properties and its performance in semantic tasks.
The complementary CBOW architecture (see Figure FIGREF6) performs much worse in the evaluation of the semantic tasks BIBREF11. In CBOW, a missing word is predicted from its context. Therefore, in a single learning step, the representation of the missing word is fixed. What changes (and is learned) is the representation of the context words. By learning the representation of the context from the representation of the word, CBOW is implicitly conforming to semantic atomism: words are the basic units of meaning and the meaning of the broader context is derived from the atomic meaning of words. This may be the reason why CBOW does not exhibit the same semantic properties as Skip-gram and it performs worse in semantic tasks.
CONCLUSION AND FUTURE WORK
The distributional hypothesis as an explanation for the semantic properties of neural language models should be expanded into a more detailed account. We show one possible way to do that via a Fregean approach to meaning.
Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated with the opposition between the Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in the semantic properties of neural language representation models.
We have demonstrated the connection between the Skip-gram model and the definition of meaning as truth-value potential. Although this is an isolated observation of an analogy between a specific model and a specific theory about meaning, it is a crucial step towards finding a theory of meaning that would correspond to the current results of NLP research, increasing our understanding of NLP and ultimately of language itself.
The direction of research from successful language technologies to properties of language itself offers many opportunities for inquiry, with very few being explored so far.
Many state-of-the-art models for natural language processing use units smaller than words for their input and output. This analysis could be extended to take this into account.
It might also be interesting to think about the philosophy of science in technical fields dominated by machine learning, but that is far beyond the scope of this paper.
This work has been supported by the grant 18-02196S of the Czech Science Foundation. This research was partially supported by SVV project number 260 575. | interpretation of Frege's work are examples of holistic approaches to meaning |
6090d3187c41829613abe785f0f3665d9ecd90d9 | 6090d3187c41829613abe785f0f3665d9ecd90d9_0 | Q: What does Frege's holistic and functional approach to meaning state?
Text: INTRODUCTION
“Meaning is, therefore, something that words have in sentences; and it's something that sentences have in a language.” BIBREF0 On the other hand, meaning could also be something that words have on their own, with sentences being compositions and language a collection of words. This is the question of semantic holism versus atomism, which was important in the philosophy of language in the second half of the 20th century and has not been satisfactorily answered yet.
Artificial neural networks are the state-of-the-art solution for many problems in natural language processing (and machine learning in general). They produce word representation with interesting properties, but the way they work is little understood from the perspective of linguistics or the philosophy of language.
We believe that by finding parallels between concepts in AI and the philosophy of language, we can better understand both areas.
In this paper, we present an analogy between meaning defined as truth-value potential (a reformulation of the Fregean holistic and functional approach) and a variant of a language representation model, thereby pointing out the possibility that its “striking syntactic and semantic properties” BIBREF1 arise from adhering to holistic principles.
INTRODUCTION ::: Related work
We have found only one work concerning the philosophical aspects of neural language models BIBREF2. It is, however, concentrating on Self-Organizing Maps and Quine's version of semantic holism.
There are papers showing that Skip-gram with negative sampling is implicitly a factorization of a word-context matrix (e.g. BIBREF3, although this result was later contested by various authors, such as BIBREF4 and BIBREF5), or deriving the equations in an alternative way BIBREF6 (discussed more in Section SECREF3). This may tell us something about the model, but it does not answer the principal question: why should the matrix factorized in a certain way contain semantic information?
SEMANTIC HOLISM AND ATOMISM
Semantic holism (or meaning holism) is “the thesis that what a linguistic expression means depends on its relations to many or all other expressions within the same totality. [...] The totality in question may be the language to which the expressions belong, or a theory formulation in that language.” BIBREF7 The opposing view is called semantic atomism, and it claims that there are expressions (typically words), whose meaning does not depend on the meaning of other expressions. The meaning of these expressions is given by something outside language (e.g. their relation to physical or mental objects).
In the following sections, we will specify the implications of both alternatives for semantics. The question also plays a role in cognitive science (content identity and similarity), epistemology (commensurability of theories) and seems to be strongly connected with the analytic/synthetic distinction BIBREF0. There are other positions in between these two, such as semantic molecularism or the belief that neither external nor internal relations are primary in forming meaning. However, to keep this text simple, we will only concentrate on the extreme positions. We will also only talk about words, although the same argument can be used with smaller meaningful language units (e.g. parts of a compound word).
Our goal is not to assess whether the truth lies with holism, atomism or neither of them. We will only show that holism is a useful perspective where understanding neural language models is concerned.
Before we get into details of the two perspectives, let us point out two critical aspects of their difference: holism proclaims interdependence of meanings of words, contrary to their independence in atomism. And holism favours decomposition over composition.
SEMANTIC HOLISM AND ATOMISM ::: Atomism
“It is a widely held view that much of the history of the philosophy of language consists of a failed attempt to make semantic atomism work.” BIBREF0
Atomism played an important role in analytic philosophy, starting with Bertrand Russell's logical atomism and continuing with logical positivism, as exemplified in this quote by Carnap BIBREF8:
A language consists of a vocabulary and a syntax, i.e. a set of words which have meanings and rules of sentence formation. These rules indicate how sentences may be formed out of the various sorts of words.
For logical positivists, words have meaning, because they refer to objects (be it physical, sensual, logical, mathematical or other). The rules of composition determine the meaning of sentences (and rule out senseless sequences of words).
Under this (or similar) view, the fact that words refer to the outside world is presupposed. Their references are independent of each other (that “dog” refers to dog is independent of the fact that “horse” refers to horse). There is a strong emphasis on compositionality, which reached its peak in Chomskian linguistics and is still relevant today.
Crucially, this means that a word can have meaning on its own (e.g. by referring to something). The meaning of larger units, such as sentences, is derived by the rules of composition from the meaning of words.
SEMANTIC HOLISM AND ATOMISM ::: Holism
Semantic holism accents the interdependence of meaning. The whole (language, theory, ...) is the primary vehicle of meaning. The meaning of smaller units is derived by decomposition.
This view is motivated by the same word having a different meaning in a different context. Gottlob Frege showed BIBREF9 that even such seemingly unambiguous words as numbers play distinct roles in different situations: “5 is a prime number” and “there are 5 cows on the meadow” are different at least in that the first “5” signifies a complete (abstract) object, while the second one needs to be supplemented with the information that it is cattle of which there are 5 specimens; otherwise the expression would not be grammatical.
Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character.
Another group of arguments for holism consists of variations on the theme of the impossibility of knowing or using a word without being able to use other words. For example, it could be argued that a person could not correctly use the word “mammal” without also knowing (at least some of) “bird”, “animal” and kinds of animals. Therefore the meaning of words cannot be formed in isolation.
Something that is harder to explain under holism than under atomism is the fact that words refer to objects. If the meaning of words is given by other words, how is it connected to the world around us? However, not all words refer to something. And even if subscribing to holism makes explaining reference harder, it may be because it is a hard problem to explain.
Another thing that is simpler under atomism is compositionality. While in atomism it plays a central role as one of the presupposed properties of language, holism may not need it. But it does not claim that words do not have meaning at all, only that it is derived (by some sort of decomposition) from the meaning of the whole.
WORD REPRESENTATIONS IN AI
Although all artificial neural networks that work with language must have some way of representing it, the most interesting representations come from neural language models. Language modelling is a task of predicting a missing word from a sequence or generating text. There is also a similar class of models that are designed specifically to produce representations of language units, which we will call neural language representation models.
The representations (also called embeddings) are high dimensional vectors of real numbers. They are either learned together with the rest of the network for the particular task or pretrained by a general language representation model (typically on a larger dataset not specific for the task).
Some neural language (representation) models produce representations with semantic properties, although the task of language modelling itself is not (at least at first sight) directly connected with semantics and no explicit semantic annotation is given to the neural network.
These semantic properties became popular with the invention of the word2vec software and the Skip-gram model, whose author said about it BIBREF1:
The model itself has no knowledge of syntax or morphology or semantics. Remarkably, training such a purely lexical model to maximize likelihood will induce word representations with striking syntactic and semantic properties.
However, they did not present any explanation of the phenomenon.
Goldberg and Levy BIBREF6 present a detailed derivation of the central equation of the Skip-gram model. In the last section they say:
Why does this produce good word representations?
Good question. We don't really know.
The distributional hypothesis states that words in similar contexts have similar meanings. The objective [of the Skip-gram model] clearly tries to increase the [dot product of the context and the word representations] for good word-context pairs, and decrease it for bad ones. Intuitively, this means that words that share many contexts will be similar to each other (note also that contexts sharing many words will also be similar to each other). This is, however, very hand-wavy. Can we make this intuition more precise? We'd really like to see something more formal.
We believe that the implicit holistic component of this “hand-wavy” approach is central to the quality of Skip-gram representations and we can make the intuition more precise by analogy with the definition of the truth-value potential.
WORD REPRESENTATIONS IN AI ::: Semantic properties of the Skip-Gram model
The Skip-gram model was introduced by Tomáš Mikolov et al. BIBREF11 as a method to efficiently train word embeddings. It exceeded the state of the art in various semantic tasks. The embeddings have interesting semantic properties, most notably the vector arithmetic illustrated by Figure FIGREF4 and the following equation BIBREF1:

$v(\textit {king}) - v(\textit {man}) + v(\textit {woman}) \approx v(\textit {queen}),$
meaning that starting with the word “king”, if we subtract the vector for the word “man” and add the vector for the word “woman”, the nearest vector in the embedding space will be the one that corresponds to the word “queen”. This means that queen is to woman as king is to man.
Hollis et al. BIBREF12 show that it is possible to infer various psycholinguistic and semantic properties of words from the Skip-gram embeddings. Mikolov et al. BIBREF13 also trained the Skip-gram model with phrases, resulting in even simpler and more elegant equations, such as
Mikolov et al. BIBREF11 proposed another shallow neural language model, Continuous Bag of Words (CBOW). The main difference between CBOW and Skip-gram (see Figure FIGREF6) is that while Skip-gram predicts context words from a given word, CBOW predicts a word from a given context.
RELEVANT THEORIES OF MEANING
In this section, we discuss theories of meaning that are relevant to word representations in artificial neural networks. Notice that even though they strictly speaking do not require meaning holism, they all lean towards it quite strongly.
RELEVANT THEORIES OF MEANING ::: The distributional hypothesis
Holism is generally a better alternative in cases where there is nothing besides language itself to anchor meaning to. This is the case of neural language (representation) models. If they represent meaning at all, it must be derived from the training corpus. This may be the reason behind the popularity of the distributional hypothesis in the neural language model literature. The famous saying by Firth BIBREF14, “You shall know a word by the company it keeps!”, is quoted in the majority of papers concerned with vector space models of language.
The general distributional hypothesis states that the meaning of a word is given by the contexts in which it occurs. It is, however, worth noticing that in Firth's theory, collocation is just one among multiple levels of meaning and his text does not support the idea of meaning based on context alone.
A more suitable formulation of the distributional hypothesis (referenced in connection to Skip-gram in BIBREF15) is found in Distributional structure BIBREF16, where it is suggested that distribution may be used for comparing meanings and that “difference of meaning correlates with difference of distribution”.
Although this certainly describes a basic principle of neural language models, it is still rather vague.
RELEVANT THEORIES OF MEANING ::: The use theory of meaning
The use theory of meaning can be summed up as “the meaning of a word is its use in the language” BIBREF17. It is associated with the late Wittgenstein's concept of a language game. In Philosophical Investigations BIBREF17, he writes:
To say “This combination of words makes no sense” excludes it from the sphere of language and thereby bounds the domain of language. [...] When a sentence is called senseless, it is not as it were its sense that is senseless. But a combination of words is being excluded from the language, withdrawn from circulation.
This “bounding of the domain of language” is precisely what a language model does; the use theory may therefore be one way to connect language modelling and semantics.
That “knowledge of language emerges from language use” is also one of the main hypotheses of cognitive linguistics BIBREF18.
RELEVANT THEORIES OF MEANING ::: Structuralism
In structuralism BIBREF19, the meaning of a word is given by its relation to the other words of the language:
The elements of a structure have neither extrinsic designation, nor intrinsic signification. Then what is left? [...] [N]othing other than a sense [...]: a sense which is necessarily and uniquely “positional.” BIBREF20
This holds for word representations in artificial neural networks as well. The vectors representing the words do not have any other meaning than their position among the rest of the vectors and a single vector does not have any significance outside the model. This is also demonstrated by the vectors being different every time the model is trained because of random initialization.
SKIP-GRAM AND TRUTH-VALUE POTENTIAL
In this section, we introduce the truth-value potential and show that Skip-gram corresponds to it better than CBOW.
SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: The truth-value potential
Tugendhat's compact reformulation of Frege's sentence holism, the definition of meaning as truth-value potential is BIBREF21:
[T]wo expressions $\phi $ and $\psi $ have the same truth-value potential if and only if, whenever each is completed by the same expression to form a sentence, the two sentences have the same truth-value.
We can also express this definition in the following form:

$M(\phi ) = M(\psi ) \Leftrightarrow \forall x\colon T(x(\phi )) = T(x(\psi )),$

where $M$ is the truth-value potential (meaning), $T$ is the truth-value of the sentence and $x(\omega )$ is the result of completing the expression $\omega $ by the expression $x$ to form a sentence.
One important aspect of this definition is that, following Frege BIBREF10, it is based on an assumption that the sentence (or rather the corresponding judgement) is the basic unit of meaning.
SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: Word2vec models and semantic holism
The definition of meaning as truth-value potential is analogous to the process of training a model for word representations. One difference is that when we are training a model, we do not have the whole of language at our disposal. Even after approximating the language with a finite corpus, it is still not practical to compare all the contexts for a given word at the same time, therefore the universal quantifier has to be replaced by an iterative process of examining the contexts one by one (or actually batch by batch, which is a step back towards the totality that is being estimated). And we have no means to assess whether the sentences from the corpus are true or false. We can either assume that they are mostly true, or try to replace the concept of truth with something else (maybe language use). Even the first option seems to be enough—imagine a corpus full of false sentences about cats, e.g. “Cats can fly.”, “Cats are cetaceans.” etc. We cannot expect the representation of the word “cats” in a model trained on this corpus to be any good, therefore the requirement for the corpus to consist mostly of true sentences is not excessive.
The simplest model that corresponds to this analogy is the Skip-gram model. It does just what is described in the definition – it fixes a word and goes through all the possible contexts. It compares the words based on the context. The context words are predicted and their representations are fixed (in a single training step), while the representation of a single word is learned. By learning the representation of a word from the representation of the context, Skip-gram complies with the principles of semantic holism. The analogy between the definition of truth-value potential and the process of training the Skip-gram model is one possible explanation for its semantic properties and its performance in semantic tasks.
The complementary CBOW architecture (see Figure FIGREF6) performs much worse in the evaluation of the semantic tasks BIBREF11. In CBOW, a missing word is predicted from its context. Therefore, in a single learning step, the representation of the missing word is fixed. What changes (and is learned) is the representation of the context words. By learning the representation of the context from the representation of the word, CBOW is implicitly conforming to semantic atomism: words are the basic units of meaning and the meaning of the broader context is derived from the atomic meaning of words. This may be the reason why CBOW does not exhibit the same semantic properties as Skip-gram and it performs worse in semantic tasks.
CONCLUSION AND FUTURE WORK
The distributional hypothesis as an explanation for the semantic properties of neural language models should be expanded into a more detailed account. We show one possible way to do that via a Fregean approach to meaning.
Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated with the opposition between the Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in the semantic properties of neural language representation models.
We have demonstrated the connection between the Skip-gram model and the definition of meaning as truth-value potential. Although this is an isolated observation of an analogy between a specific model and a specific theory about meaning, it is a crucial step towards finding a theory of meaning that would correspond to the current results of NLP research, increasing our understanding of NLP and ultimately of language itself.
The direction of research from successful language technologies to properties of language itself offers many opportunities for inquiry, with very few being explored so far.
Many state-of-the-art models for natural language processing use units smaller than words for their input and output. This analysis could be extended to take this into account.
It might also be interesting to think about the philosophy of science in technical fields dominated by machine learning, but that is far beyond the scope of this paper.
This work has been supported by the grant 18-02196S of the Czech Science Foundation. This research was partially supported by SVV project number 260 575. | Only in the context of a sentence does a word have a meaning. |
117aa7811ed60e84d40cd8f9cb3ca78781935a98 | 117aa7811ed60e84d40cd8f9cb3ca78781935a98_0 | Q: Do they evaluate the quality of the paraphrasing model?
Text: Introduction
Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer people in Czech Republic refers to Czech Republic and What refers to the language and speak refers to the predicate official language.
We address the above problems by using paraphrases of the original question. Paraphrasing has been shown to be promising for semantic parsing BIBREF9 , BIBREF10 , BIBREF11 . We propose a novel framework for paraphrasing using latent-variable PCFGs (L-PCFGs). Earlier approaches to paraphrasing used phrase-based machine translation for text-based QA BIBREF12 , BIBREF13 , or hand-annotated grammars for KB-based QA BIBREF10 . We find that phrase-based statistical machine translation (MT) approaches mainly produce lexical paraphrases without much syntactic diversity, whereas our grammar-based approach is capable of producing both lexically and syntactically diverse paraphrases. Unlike MT-based approaches, our system does not require aligned parallel paraphrase corpora. In addition, we do not require hand-annotated grammars for paraphrase generation but instead learn the grammar directly from a large-scale question corpus.
The main contributions of this paper are twofold. First, we present an algorithm (§ "Paraphrase Generation Using Grammars" ) to generate paraphrases using latent-variable PCFGs. We use the spectral method of narayan-15 to estimate L-PCFGs on a large-scale question treebank. Our grammar model leads to a robust and efficient system for paraphrase generation in open-domain question answering. While CFGs have been explored for paraphrasing using a bilingual parallel corpus BIBREF14 , ours is the first implementation of CFGs that uses only monolingual data. Second, we show that generated paraphrases can be used to improve semantic parsing of questions into Freebase logical forms (§ "Semantic Parsing using Paraphrasing" ). We build on a strong baseline of reddylargescale2014 and show that our grammar model competes with the MT baseline even without using any parallel paraphrase resources.
Paraphrase Generation Using Grammars
Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .
In our estimation of L-PCFGs, we use the spectral method of narayan-15, instead of using EM, as has been used in the past by matsuzaki-2005 and petrov-2006. The spectral method we use enables the choice of a set of feature functions that indicate the latent states, which proves to be useful in our case. It also leads to sparse grammar estimates and compact models.
The spectral method works by identifying feature functions for “inside” and “outside” trees and then clustering them into latent states. It then follows with a maximum likelihood estimation step that assumes the latent states are represented by the clusters obtained through the feature function clustering. For more details about these constructions, we refer the reader to cohen-13 and narayan-15.
The rest of this section describes our paraphrase generation algorithm.
Paraphrases Generation Algorithm
We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.
We first build a word lattice $W_q$ for the input question $q$ . We use the lattice to constrain our paraphrases to a specific choice of words and phrases that can be used. Once this lattice is created, a grammar $G_{\mathrm {syn}}^{\prime }$ is then extracted from $G_{\mathrm {syn}}$ . This grammar is constrained to the lattice.
We experiment with three ways of constructing word lattices: naïve word lattices representing the words from the input question only, word lattices constructed with the Paraphrase Database BIBREF14 and word lattices constructed with a bi-layered L-PCFG, described in § "Bi-Layered L-PCFGs" . For example, Figure 1 shows an example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.
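As a rough illustration of the lattice construction, the sketch below builds a simplified, confusion-network style lattice in which each slot holds the original token plus PPDB-style alternatives. The substitution table is invented and stands in for actual PPDB lexical and phrasal rules, which also cover multi-word spans.

```python
# Toy PPDB-like substitution table; the entries are invented, not actual
# PPDB content.
PARAPHRASES = {
    "people": ["citizens", "the population", "members of the public"],
    "speak": ["talk", "use"],
    "language": ["tongue"],
}

def build_lattice(question):
    """One slot per input token, each holding the token plus its alternatives."""
    lattice = []
    for token in question.split():
        lattice.append([token] + PARAPHRASES.get(token, []))
    return lattice

lattice = build_lattice("what language do people in Czech Republic speak ?")
for slot in lattice:
    print(slot)
```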
Once $G_{\mathrm {syn}}^{\prime }$ is generated, we sample paraphrases of the input question $q$ . These paraphrases are further filtered with a classifier to improve the precision of the generated paraphrases.
We train the L-PCFG $G_{\mathrm {syn}}$ on the Paralex corpus BIBREF9 . Paralex is a large monolingual parallel corpus, containing 18 million pairs of question paraphrases with 2.4M distinct questions in the corpus. It is suitable for our task of generating paraphrases since its large scale makes our model robust for open-domain questions. We construct a treebank by parsing 2.4M distinct questions from Paralex using the BLLIP parser BIBREF25 .
Given the treebank, we use the spectral algorithm of narayan-15 to learn an L-PCFG for constituency parsing, which gives us $G_{\mathrm {syn}}$ . We follow narayan-15 and use the same feature functions for the inside and outside trees as they do, capturing contextual syntactic information about nonterminals. We refer the reader to narayan-15 for a more detailed description of these features. In our experiments, we set the number of latent states to 24.
Once we estimate $G_{\mathrm {syn}}$ from the Paralex corpus, we restrict it for each question to a grammar $G_{\mathrm {syn}}^{\prime }$ by keeping only the rules that could lead to a derivation over the lattice. This step is similar to lexical pruning in standard grammar-based generation process to avoid an intermediate derivation which can never lead to a successful derivation BIBREF26 , BIBREF27 .
Sampling a question from the grammar $G_{\mathrm {syn}}^{\prime }$ is done by recursively sampling nodes in the derivation tree, together with their latent states, in a top-down breadth-first fashion. Sampling from the pruned grammar $G_{\mathrm {syn}}^{\prime }$ raises an issue of oversampling words that are more frequent in the training data. To lessen this problem, we follow a controlled sampling approach where sampling is guided by the word lattice $W_q$ . Once a word $w$ from a path $e$ in $W_q$ is sampled, all other parallel or conflicting paths to $e$ are removed from $W_q$ . For example, generating for the word lattice in Figure 1 , when we sample the word citizens, we drop out the paths “human beings”, “people's”, “the population”, “people” and “members of the public” from $W_q$ and accordingly update the grammar. The controlled sampling ensures that each sampled question uses words from a single start-to-end path in $W_q$ . For example, we could sample a question what is Czech Republic 's language? by sampling words from the path (what, language, do, people 's, in, Czech, Republic, is speaking, ?) in Figure 1 . We repeat this sampling process to generate multiple potential paraphrases.
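A compact sketch of top-down sampling from a pruned PCFG is given below. The toy grammar loosely mimics $G_{\mathrm {syn}}^{\prime }$ for the running example but omits the latent-state annotations; the comment marks where the controlled-sampling step of removing conflicting lattice paths would take place.

```python
import random

# Toy PCFG standing in for the pruned grammar; rules and probabilities are
# invented for illustration.
GRAMMAR = {
    "SBARQ": [(("WHNP", "SQ", "?"), 1.0)],
    "WHNP": [(("what", "NN"), 1.0)],
    "NN": [(("language",), 0.6), (("tongue",), 0.4)],
    "SQ": [(("do", "NP", "VP"), 0.5), (("is", "NP", "VP"), 0.5)],
    "NP": [(("people", "in", "Czech", "Republic"), 0.7),
           (("Czech", "Republic", "'s", "citizens"), 0.3)],
    "VP": [(("speak",), 0.5), (("use",), 0.5)],
}

def sample(symbol, rng):
    """Recursive top-down sampling of a derivation; for a PCFG the expansion
    order does not change the distribution over derivations."""
    if symbol not in GRAMMAR:                 # terminal symbol
        return [symbol]
    rules, probs = zip(*GRAMMAR[symbol])
    rhs = rng.choices(rules, weights=probs, k=1)[0]
    words = []
    for child in rhs:
        words.extend(sample(child, rng))
        # In the full system, sampling a word here would also remove the
        # conflicting parallel paths from W_q and re-prune the grammar.
    return words

rng = random.Random(0)
for _ in range(3):
    print(" ".join(sample("SBARQ", rng)))
```

Repeated sampling from such a grammar yields lexically and syntactically varied candidates, which is why the later filtering step is needed.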
The resulting generation algorithm has multiple advantages over existing grammar generation methods. First, the sampling from an L-PCFG grammar lessens the lexical ambiguity problem evident in lexicalized grammars such as tree adjoining grammars BIBREF27 and combinatory categorial grammars BIBREF28 . Our grammar is not lexicalized; only unary context-free rules are lexicalized. Second, the top-down sampling restricts the combinatorics inherent to bottom-up search BIBREF29 . Third, we do not restrict the generation by the order information in the input. The lack of order information in the input often leads to high combinatorial complexity in lexicalist approaches BIBREF30 . In our case, however, we use sampling to reduce this problem, and it allows us to produce syntactically diverse questions. And fourth, we impose no constraints on the grammar, thereby making it easier to maintain bi-directional (recursive) grammars that can be used both for parsing and for generation BIBREF31 .
Bi-Layered L-PCFGs
As mentioned earlier, one of our lattice types is based on bi-layered PCFGs introduced here.
In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.
To create the bi-layered L-PCFG, we again use the spectral algorithm of narayan-15 to estimate a grammar $G_{\mathrm {par}}$ from the Paralex corpus. We use the word alignment of paraphrase question pairs in Paralex to map the inside and outside trees of each nonterminal in the treebank to bag-of-words features. The number of latent states we use is 1,000.
Once the two feature functions (syntactic in $G_{\mathrm {syn}}$ and semantic in $G_{\mathrm {par}}$ ) are created, each nonterminal in the training treebank is assigned two latent states (cluster identifiers). Figure 2 shows an example annotation of trees for three paraphrase questions from the Paralex corpus. We compute the parameters of the bi-layered L-PCFG $G_{\mathrm {layered}}$ with a simple frequency count maximum likelihood estimate over this annotated treebank. As such, $G_{\mathrm {layered}}$ is a combination of $G_{\mathrm {syn}}$ and $G_{\mathrm {par}}$ , resulting in 24,000 latent states (24 syntactic x 1000 semantic).
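The frequency-count maximum likelihood step can be sketched in a few lines: every nonterminal in the annotated treebank already carries its syntactic and semantic latent states, and rule probabilities are relative frequencies conditioned on the annotated parent. The annotated rules below are invented stand-ins for the Paralex treebank.

```python
from collections import Counter

# Invented annotated rules (parent, child_1, ..., child_n); every nonterminal
# carries its two latent states as "X-m1-m2".
annotated_rules = [
    ("SBARQ-3-403", "WHNP-1-291", "SQ-2-77"),
    ("SBARQ-3-403", "WHNP-1-291", "SQ-2-78"),
    ("WHNP-1-291", "WP-4-12", "NN-5-142"),
    ("NN-5-142", "day"),
    ("NN-5-142", "date"),
]

rule_counts = Counter(annotated_rules)
parent_counts = Counter(rule[0] for rule in annotated_rules)

# Frequency-count maximum likelihood estimate of P(rule | annotated parent).
probabilities = {rule: count / parent_counts[rule[0]]
                 for rule, count in rule_counts.items()}
for rule, p in sorted(probabilities.items()):
    print(rule, round(p, 3))
```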
Consider an example where we want to generate paraphrases for the question what day is nochebuena. Parsing it with $G_{\mathrm {layered}}$ will lead to the leftmost hybrid structure as shown in Figure 2 . The assignment of the first latent state to each nonterminal ensures that we retrieve the correct syntactic representation of the sentence. Here, however, we are more interested in the second latent states assigned to each nonterminal, which capture the paraphrase information of the sentence at various levels. For example, we have a unary lexical rule (NN-*-142 day) indicating that we observe day with NN of the paraphrase type 142. We could use this information to extract unary rules of the form (NN-*-142 $w$ ) in the treebank that will generate words $w$ which are paraphrases of day. Similarly, any node WHNP-*-291 in the treebank will generate paraphrases for what day, and SBARQ-*-403 for what day is nochebuena. This way we will be able to generate the paraphrases when is nochebuena and when is nochebuena celebrated as they both have SBARQ-*-403 as their roots.
To generate a word lattice $W_q$ for a given question $q$ , we parse $q$ with the bi-layered grammar $G_{\mathrm {layered}}$ . For each rule of the form $X$ - $m_1$ - $m_2 \rightarrow w$ in the bi-layered tree, with $X \in {\cal P}$ , $m_1 \in \lbrace 1, \ldots , 24 \rbrace $ , $m_2 \in \lbrace 1, \ldots , 1000 \rbrace $ and $w$ a word in $q$ , we extract from $G_{\mathrm {layered}}$ the lexical rules of the form $X$ - $*$ - $m_2 \rightarrow w^{\prime }$ , i.e. the rules that share the nonterminal $X$ and the second (paraphrase) latent state $m_2$ . For each such $w^{\prime }$ , we add a path $w^{\prime }$ parallel to $w$ in the word lattice.
Paraphrase Classification
Our sampling algorithm overgenerates and produces some incorrect paraphrases. To improve its precision, we build a binary classifier to filter the generated paraphrases. We randomly select 100 distinct questions from the Paralex corpus and generate paraphrases using our generation algorithm with various lattice settings. We randomly select 1,000 pairs of input-sampled sentences and manually annotate them as “correct” or “incorrect” paraphrases. We train our classifier on this manually created training data. We follow madnani2012, who used MT metrics for paraphrase identification, and experiment with 8 MT metrics as features for our binary classifier. In addition, we experiment with a binary feature which checks if the sampled paraphrase preserves named entities from the input sentence. We use WEKA BIBREF32 to replicate the classifier of madnani2012 with our new feature. We tune the feature set for our classifier on the development data.
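A simplified stand-in for the classifier is sketched below. The original setup uses WEKA with eight MT metrics; here two hand-rolled overlap features, a length ratio and the named-entity preservation flag feed a scikit-learn logistic regression, and the four labelled training pairs are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(source, candidate, entities):
    """Simplified stand-ins for the MT-metric features, plus the binary
    named-entity preservation feature."""
    src, cand = source.lower().split(), candidate.lower().split()
    overlap = len(set(src) & set(cand))
    precision = overlap / len(cand) if cand else 0.0
    recall = overlap / len(src) if src else 0.0
    length_ratio = len(cand) / len(src) if src else 0.0
    keeps_entities = float(all(e.lower() in candidate.lower() for e in entities))
    return [precision, recall, length_ratio, keeps_entities]

# Invented training pairs labelled correct (1) / incorrect (0).
train = [
    ("what language do people in czech republic speak",
     "what is czech republic 's language", ["czech republic"], 1),
    ("what language do people in czech republic speak",
     "what do people eat in france", ["czech republic"], 0),
    ("when is nochebuena", "what day is nochebuena celebrated", ["nochebuena"], 1),
    ("when is nochebuena", "when is it", ["nochebuena"], 0),
]
X = np.array([features(s, c, e) for s, c, e, _ in train])
y = np.array([label for *_, label in train])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features("when is nochebuena",
                                  "what date is nochebuena", ["nochebuena"])])[:, 1])
```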
Semantic Parsing using Paraphrasing
In this section we describe how the paraphrase algorithm is used for converting natural language to Freebase queries. Following reddylargescale2014, we formalize the semantic parsing problem as a graph matching problem, i.e., finding the Freebase subgraph (grounded graph) that is isomorphic to the input question semantic structure (ungrounded graph).
This formulation has a major limitation that can be alleviated by using our paraphrase generation algorithm. Consider the question What language do people in Czech Republic speak?. The ungrounded graph corresponding to this question is shown in Figure 3 . The Freebase grounded graph which results in the correct answer is shown in Figure 3 . Note that these two graphs are non-isomorphic, making it impossible to derive the correct grounding from the ungrounded graph. In fact, at least 15% of the examples in our development set fail to satisfy the isomorphism assumption. In order to address this problem, we use paraphrases of the input question to generate additional ungrounded graphs, with the aim that one of those paraphrases will have a structure isomorphic to the correct grounding. Figure 3 and Figure 3 are two such paraphrases which can be converted to Figure 3 as described in sec:groundedGraphs.
For a given input question, first we build ungrounded graphs from its paraphrases. We convert these graphs to Freebase graphs. To learn this mapping, we rely on manually assembled question-answer pairs. For each training question, we first find the set of oracle grounded graphs—Freebase subgraphs which when executed yield the correct answer—derivable from the question's ungrounded graphs. These oracle graphs are then used to train a structured perceptron model. These steps are discussed in detail below.
Ungrounded Graphs from Paraphrases
We use GraphParser BIBREF7 to convert paraphrases to ungrounded graphs. This conversion involves three steps: 1) parsing the paraphrase using a CCG parser to extract syntactic derivations BIBREF33 , 2) extracting logical forms from the CCG derivations BIBREF34 , and 3) converting the logical forms to an ungrounded graph. The ungrounded graph for the example question and its paraphrases are shown in Figure 3 , Figure 3 and Figure 3 , respectively.
Grounded Graphs from Ungrounded Graphs
The ungrounded graphs are grounded to Freebase subgraphs by mapping entity nodes, entity-entity edges and entity type nodes in the ungrounded graph to Freebase entities, relations and types, respectively. For example, the graph in Figure 3 can be converted to a Freebase graph in Figure 3 by replacing the entity node Czech Republic with the Freebase entity CzechRepublic, the edge (speak.arg $_2$ , speak.in) between $x$ and Czech Republic with the Freebase relation (location.country.official_language.2, location.country.official_language.1), the type node language with the Freebase type language.human_language, and the target node remains intact. The rest of the nodes, edges and types are grounded to null. In a similar fashion, Figure 3 can be grounded to Figure 3 , but not Figure 3 to Figure 3 . If no paraphrase is isomorphic to the target grounded graph, our grounding fails.
Learning
We use a linear model to map ungrounded graphs to grounded ones. The parameters of the model are learned from question-answer pairs. For example, the question What language do people in Czech Republic speak? paired with its answer $\lbrace \textsc {CzechLanguage}\rbrace $ . In line with most work on question answering against Freebase, we do not rely on annotated logical forms associated with the question for training and treat the mapping of a question to its grounded graph as latent.
Let $q$ be a question, let $p$ be a paraphrase, let $u$ be an ungrounded graph for $p$ , and let $g$ be a grounded graph formed by grounding the nodes and edges of $u$ to the knowledge base $\mathcal {K}$ (throughout we use Freebase as the knowledge base). Following reddylargescale2014, we use beam search to find the highest scoring tuple of paraphrase, ungrounded and grounded graphs $(\hat{p}, \hat{u}, \hat{g})$ under the model $\theta \in \mathbb {R}^n$ : $ ({\hat{p},\hat{u},\hat{g}}) = \operatornamewithlimits{arg\,max}_{(p,u,g)} \theta \cdot \Phi (p,u,g,q,\mathcal {K})\,, $
where $\Phi (p, u, g, q, \mathcal {K}) \in \mathbb {R}^n$ denotes the features for the tuple of paraphrase, ungrounded and grounded graphs. The feature function has access to the paraphrase, ungrounded and grounded graphs, the original question, as well as to the content of the knowledge base and the denotation $|g|_\mathcal {K}$ (the denotation of a grounded graph is defined as the set of entities or attributes reachable at its target node). See sec:details for the features employed. The model parameters are estimated with the averaged structured perceptron BIBREF35 . Given a training question-answer pair $(q,\mathcal {A})$ , the update is: $ \theta ^{t+1} \leftarrow \theta ^{t} + \Phi (p^+, u^+, g^+, q, \mathcal {K}) - \Phi (\hat{p}, \hat{u}, \hat{g}, q, \mathcal {K})\,, $
where $({p^+,u^+,g^+})$ denotes the tuple of gold paraphrase, gold ungrounded and grounded graphs for $q$ . Since we do not have direct access to the gold paraphrase and graphs, we instead rely on the set of oracle tuples, $\mathcal {O}_{\mathcal {K}, \mathcal {A}}(q)$ , as a proxy: $ (p^{+},u^{+},{g^{+}}) = \operatornamewithlimits{arg\,max}_{(p,u,g) \in \mathcal {O}_{\mathcal {K},\mathcal {A}}(q)} \theta \cdot \Phi ({p,u,g,q,\mathcal {K}})\,, $
where $\mathcal {O}_{\mathcal {K}, \mathcal {A}}(q)$ is defined as the set of tuples ( $p$ , $u$ , $g$ ) derivable from the question $q$ , whose denotation $|g|_\mathcal {K}$ has minimal $F_1$ -loss against the gold answer $\mathcal {A}$ . We find the oracle graphs for each question a priori by performing beam-search with a very large beam.
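The update can be illustrated with a toy candidate space in which exhaustive enumeration replaces beam search. The candidate graphs, feature vectors and denotations below are invented; decode plays the role of the argmax under the model and oracle the argmax restricted to candidates whose denotation matches the gold answer.

```python
import numpy as np

# Invented candidate space: each question has (graph id, feature vector,
# denotation) candidates; a real system would obtain these via beam search.
CANDIDATES = {
    "q1": [("g_b", np.array([0.0, 1.0, 0.0]), {"Prague"}),
           ("g_a", np.array([1.0, 0.0, 1.0]), {"CzechLanguage"})],
    "q2": [("g_c", np.array([0.0, 1.0, 1.0]), {"Spanish"}),
           ("g_d", np.array([1.0, 0.0, 0.0]), {"Madrid"})],
}
GOLD = {"q1": {"CzechLanguage"}, "q2": {"Spanish"}}

def decode(question, theta):
    return max(CANDIDATES[question], key=lambda c: theta @ c[1])

def oracle(question, theta):
    good = [c for c in CANDIDATES[question] if c[2] == GOLD[question]]
    return max(good, key=lambda c: theta @ c[1])

theta, theta_sum, steps = np.zeros(3), np.zeros(3), 0
for epoch in range(3):
    for q in CANDIDATES:
        predicted, gold = decode(q, theta), oracle(q, theta)
        if predicted[0] != gold[0]:
            theta = theta + gold[1] - predicted[1]   # perceptron update
        theta_sum, steps = theta_sum + theta, steps + 1
theta_avg = theta_sum / steps                        # averaged weights
print(theta, theta_avg)
```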
Experimental Setup
Below, we give details on the evaluation dataset and baselines used for comparison. We also describe the model features and provide implementation details.
Evaluation Data and Metric
We evaluate our approach on the WebQuestions dataset BIBREF5 . WebQuestions consists of 5,810 question-answer pairs where the questions represent real Google search queries. We use the standard train/test splits, with 3,778 train and 2,032 test questions. For our development experiments we tune the models on held-out data consisting of 30% of the training questions, while for final testing we use the complete training data. We use average precision (avg P.), average recall (avg R.) and average F $_1$ (avg F $_1$ ) proposed by berantsemantic2013 as evaluation metrics.
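For concreteness, the averaged metrics can be computed as below; the predicted and gold answer sets are invented, and per-question precision, recall and F $_1$ are simply averaged over questions, which is how we read the avg metrics here.

```python
def prf(predicted, gold):
    """Precision, recall and F1 of one predicted answer set against the gold set."""
    if not predicted and not gold:
        return 1.0, 1.0, 1.0
    tp = len(set(predicted) & set(gold))
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Invented system output against invented gold answers, for illustration only.
results = [
    ({"CzechLanguage"}, {"CzechLanguage"}),
    ({"Prague", "Brno"}, {"Prague"}),
    (set(), {"Spanish"}),
]
ps, rs, f1s = zip(*(prf(pred, gold) for pred, gold in results))
avg = lambda xs: sum(xs) / len(xs)
print(f"avg P={avg(ps):.2f}  avg R={avg(rs):.2f}  avg F1={avg(f1s):.2f}")
```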
Baselines
We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.
We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions.
Implementation Details
For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contain the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem.
We use the features from reddylargescale2014. These include edge alignments and stem overlaps between ungrounded and grounded graphs, and contextual features such as word and grounded relation pairs. In addition to these features, we add two new real-valued features – the paraphrase classifier's score and the entity disambiguation lattice score.
We use beam search to infer the highest scoring graph pair for a question. The search operates over entity-entity edges and entity type nodes of each ungrounded graph. For an entity-entity edge, there are two operations: ground the edge to a Freebase relation, or skip the edge. Similarly, for an entity type node, there are two operations: ground the node to a Freebase type, or skip the node. We use a beam size of 100 in all our experiments.
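A stripped-down version of the beam search over grounding operations is sketched below: each ungrounded edge is either grounded to one of its candidate Freebase relations or skipped, and only the highest-scoring partial groundings survive each step. The candidate relations and the scores are invented, and type nodes (handled analogously) are omitted.

```python
# Invented candidate relations per ungrounded edge and invented scores; a real
# model would score a partial grounding with theta . Phi(...) over the graphs.
CANDIDATES = {
    "speak.in": ["location.country.official_language", "location.country.capital"],
    "speak.arg1": ["people.person.nationality"],
}
SCORES = {
    ("speak.in", "location.country.official_language"): 2.0,
    ("speak.in", "location.country.capital"): 0.5,
    ("speak.in", None): 0.0,
    ("speak.arg1", "people.person.nationality"): -1.0,
    ("speak.arg1", None): 0.0,
}

def score(partial):
    return sum(SCORES[decision] for decision in partial)

def beam_search(edges, beam_size=100):
    beam = [()]                                          # partial groundings
    for edge in edges:
        options = CANDIDATES.get(edge, []) + [None]      # None = skip the edge
        expanded = [partial + ((edge, option),)
                    for partial in beam for option in options]
        beam = sorted(expanded, key=score, reverse=True)[:beam_size]
    return beam[0]

print(beam_search(["speak.in", "speak.arg1"]))
```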
Results and Discussion
In this section, we present results from five different systems for our question-answering experiments: original, mt, naive, ppdb and bilayered. The first two are baseline systems. The other three systems use paraphrases generated from an L-PCFG grammar. naive uses a word lattice with a single start-to-end path representing the input question itself, ppdb uses a word lattice constructed using the PPDB rules, and bilayered uses a bi-layered L-PCFG to build word lattices. Note that naive does not require any parallel resource to train, ppdb requires an external paraphrase database, and bilayered, like mt, needs a parallel corpus with paraphrase pairs. We tune our classifier features and GraphParser features on the development data. We use the best setting from tuning for evaluation on the test data.
Conclusion
We described a grammar method to generate paraphrases for questions, and applied it to a question answering system based on semantic parsing. We showed that using paraphrases for a question answering system is a useful way to improve its performance. Our method is rather generic and can be applied to any question answering system.
Acknowledgements
The authors would like to thank Nitin Madnani for his help with the implementation of the paraphrase classifier. We would like to thank our anonymous reviewers for their insightful comments. This research was supported by an EPSRC grant (EP/L02411X/1), the H2020 project SUMMA (under grant agreement 688139), and a Google PhD Fellowship for the second author. | No |
c359ab8ebef6f60c5a38f5244e8c18d85e92761d | c359ab8ebef6f60c5a38f5244e8c18d85e92761d_0 | Q: How many paraphrases are generated per question?
Text: Introduction
Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer people in Czech Republic refers to Czech Republic and What refers to the language and speak refers to the predicate official language.
We address the above problems by using paraphrases of the original question. Paraphrasing has been shown to be promising for semantic parsing BIBREF9 , BIBREF10 , BIBREF11 . We propose a novel framework for paraphrasing using latent-variable PCFGs (L-PCFGs). Earlier approaches to paraphrasing used phrase-based machine translation for text-based QA BIBREF12 , BIBREF13 , or hand-annotated grammars for KB-based QA BIBREF10 . We find that phrase-based statistical machine translation (MT) approaches mainly produce lexical paraphrases without much syntactic diversity, whereas our grammar-based approach is capable of producing both lexically and syntactically diverse paraphrases. Unlike MT-based approaches, our system does not require aligned parallel paraphrase corpora. In addition, we do not require hand-annotated grammars for paraphrase generation but instead learn the grammar directly from a large-scale question corpus.
The main contributions of this paper are twofold. First, we present an algorithm (§ "Paraphrase Generation Using Grammars" ) to generate paraphrases using latent-variable PCFGs. We use the spectral method of narayan-15 to estimate L-PCFGs on a large-scale question treebank. Our grammar model leads to a robust and efficient system for paraphrase generation in open-domain question answering. While CFGs have been explored for paraphrasing using a bilingual parallel corpus BIBREF14 , ours is the first implementation of CFGs that uses only monolingual data. Second, we show that generated paraphrases can be used to improve semantic parsing of questions into Freebase logical forms (§ "Semantic Parsing using Paraphrasing" ). We build on a strong baseline of reddylargescale2014 and show that our grammar model competes with the MT baseline even without using any parallel paraphrase resources.
Paraphrase Generation Using Grammars
Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .
In our estimation of L-PCFGs, we use the spectral method of narayan-15, instead of using EM, as has been used in the past by matsuzaki-2005 and petrov-2006. The spectral method we use enables the choice of a set of feature functions that indicate the latent states, which proves to be useful in our case. It also leads to sparse grammar estimates and compact models.
The spectral method works by identifying feature functions for “inside” and “outside” trees and then clustering them into latent states. It then follows with a maximum likelihood estimation step that assumes the latent states are represented by the clusters obtained through the feature function clustering. For more details about these constructions, we refer the reader to cohen-13 and narayan-15.
The rest of this section describes our paraphrase generation algorithm.
Paraphrases Generation Algorithm
We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.
We first build a word lattice $W_q$ for the input question $q$ . We use the lattice to constrain our paraphrases to a specific choice of words and phrases that can be used. Once this lattice is created, a grammar $G_{\mathrm {syn}}^{\prime }$ is then extracted from $G_{\mathrm {syn}}$ . This grammar is constrained to the lattice.
We experiment with three ways of constructing word lattices: naïve word lattices representing the words from the input question only, word lattices constructed with the Paraphrase Database BIBREF14 and word lattices constructed with a bi-layered L-PCFG, described in § "Bi-Layered L-PCFGs" . For example, Figure 1 shows an example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.
Once $G_{\mathrm {syn}}^{\prime }$ is generated, we sample paraphrases of the input question $q$ . These paraphrases are further filtered with a classifier to improve the precision of the generated paraphrases.
We train the L-PCFG $G_{\mathrm {syn}}$ on the Paralex corpus BIBREF9 . Paralex is a large monolingual parallel corpus, containing 18 million pairs of question paraphrases with 2.4M distinct questions in the corpus. It is suitable for our task of generating paraphrases since its large scale makes our model robust for open-domain questions. We construct a treebank by parsing 2.4M distinct questions from Paralex using the BLLIP parser BIBREF25 .
Given the treebank, we use the spectral algorithm of narayan-15 to learn an L-PCFG for constituency parsing, which gives us $G_{\mathrm {syn}}$ . We follow narayan-15 and use the same feature functions for the inside and outside trees as they do, capturing contextual syntactic information about nonterminals. We refer the reader to narayan-15 for a more detailed description of these features. In our experiments, we set the number of latent states to 24.
Once we estimate $G_{\mathrm {syn}}$ from the Paralex corpus, we restrict it for each question to a grammar $G_{\mathrm {syn}}^{\prime }$ by keeping only the rules that could lead to a derivation over the lattice. This step is similar to lexical pruning in standard grammar-based generation process to avoid an intermediate derivation which can never lead to a successful derivation BIBREF26 , BIBREF27 .
Sampling a question from the grammar $G_{\mathrm {syn}}^{\prime }$ is done by recursively sampling nodes in the derivation tree, together with their latent states, in a top-down breadth-first fashion. Sampling from the pruned grammar $G_{\mathrm {syn}}^{\prime }$ raises an issue of oversampling words that are more frequent in the training data. To lessen this problem, we follow a controlled sampling approach where sampling is guided by the word lattice $W_q$ . Once a word $w$ from a path $e$ in $W_q$ is sampled, all other parallel or conflicting paths to $e$ are removed from $W_q$ . For example, generating for the word lattice in Figure 1 , when we sample the word citizens, we drop out the paths “human beings”, “people's”, “the population”, “people” and “members of the public” from $W_q$ and accordingly update the grammar. The controlled sampling ensures that each sampled question uses words from a single start-to-end path in $W_q$ . For example, we could sample a question what is Czech Republic 's language? by sampling words from the path (what, language, do, people 's, in, Czech, Republic, is speaking, ?) in Figure 1 . We repeat this sampling process to generate multiple potential paraphrases.
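The following sketch illustrates only the lattice bookkeeping behind the controlled sampling, reusing the (start, end, phrase) edge representation from the earlier lattice sketch; the random.choice call is a stand-in for the actual grammar-weighted, top-down sampling of leaves from $G_{\mathrm {syn}}^{\prime }$ .

```python
import random

# Once a phrase spanning (s, e) is sampled, every other path overlapping
# that span is dropped, so each sampled paraphrase follows a single
# start-to-end path through W_q.
def sample_word(lattice_edges, start):
    choices = [edge for edge in lattice_edges if edge[0] == start]
    if not choices:
        return None, lattice_edges
    chosen = random.choice(choices)      # stand-in for grammar-weighted choice
    s, e, _ = chosen
    remaining = {edge for edge in lattice_edges
                 if edge == chosen or edge[1] <= s or edge[0] >= e}
    return chosen, remaining

def sample_path(lattice_edges, end_position):
    pos, words, edges = 0, [], set(lattice_edges)
    while pos < end_position:
        chosen, edges = sample_word(edges, pos)
        if chosen is None:
            break
        words.append(chosen[2])
        pos = chosen[1]
    return words
```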
The resulting generation algorithm has multiple advantages over existing grammar generation methods. First, sampling from an L-PCFG grammar lessens the lexical ambiguity problem evident in lexicalized grammars such as tree adjoining grammars BIBREF27 and combinatory categorial grammars BIBREF28 ; our grammar is not lexicalized, only the unary context-free rules are lexicalized. Second, the top-down sampling restricts the combinatorics inherent to bottom-up search BIBREF29 . Third, we do not restrict the generation by the order information in the input. The lack of order information in the input often leads to high combinatorial complexity in lexicalist approaches BIBREF30 ; in our case, however, sampling reduces this problem and allows us to produce syntactically diverse questions. And fourth, we impose no constraints on the grammar, thereby making it easier to maintain bi-directional (recursive) grammars that can be used both for parsing and for generation BIBREF31 .
Bi-Layered L-PCFGs
As mentioned earlier, one of our lattice types is based on the bi-layered L-PCFGs introduced here.
In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.
To create the bi-layered L-PCFG, we again use the spectral algorithm of narayan-15 to estimate a grammar $G_{\mathrm {par}}$ from the Paralex corpus. We use the word alignments of the paraphrase question pairs in Paralex to map the inside and outside trees of each nonterminal in the treebank to bag-of-words features. The number of latent states we use is 1,000.
Once the two feature functions (syntactic in $G_{\mathrm {syn}}$ and semantic in $G_{\mathrm {par}}$ ) are created, each nonterminal in the training treebank is assigned two latent states (cluster identifiers). Figure 2 shows an example annotation of trees for three paraphrase questions from the Paralex corpus. We compute the parameters of the bi-layered L-PCFG $G_{\mathrm {layered}}$ with a simple frequency count maximum likelihood estimate over this annotated treebank. As such, $G_{\mathrm {layered}}$ is a combination of $G_{\mathrm {syn}}$ and $G_{\mathrm {par}}$ , resulting in 24,000 latent states (24 syntactic x 1000 semantic).
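The maximum likelihood step over the doubly annotated treebank amounts to simple relative-frequency estimation, as in the sketch below; the state identifiers and toy rules are made up for illustration.

```python
from collections import Counter, defaultdict

# Rules are (lhs, rhs) pairs over doubly annotated symbols, e.g. an lhs like
# "NN-3-142" (syntactic state 3, semantic state 142) and an rhs that is either
# a tuple of child symbols or a single word for unary lexical rules.
def mle_bilayered(rule_occurrences):
    rule_counts = Counter(rule_occurrences)
    lhs_counts = defaultdict(int)
    for (lhs, rhs), c in rule_counts.items():
        lhs_counts[lhs] += c
    return {(lhs, rhs): c / lhs_counts[lhs]          # relative frequency
            for (lhs, rhs), c in rule_counts.items()}

toy_rules = [("NN-3-142", ("day",)), ("NN-3-142", ("date",)),
             ("NN-3-142", ("day",))]
probs = mle_bilayered(toy_rules)   # P(day | NN-3-142) = 2/3
```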
Consider an example where we want to generate paraphrases for the question what day is nochebuena. Parsing it with $G_{\mathrm {layered}}$ will lead to the leftmost hybrid structure shown in Figure 2 . The assignment of the first latent state to each nonterminal ensures that we retrieve the correct syntactic representation of the sentence. Here, however, we are more interested in the second latent state assigned to each nonterminal, which captures the paraphrase information of the sentence at various levels. For example, we have a unary lexical rule (NN-*-142 day) indicating that we observe day with NN of the paraphrase type 142. We could use this information to extract unary rules of the form (NN-*-142 $w$ ) in the treebank that generate words $w$ which are paraphrases of day. Similarly, any node WHNP-*-291 in the treebank will generate paraphrases for what day, and any node SBARQ-*-403 will generate paraphrases for what day is nochebuena. This way we are able to generate the paraphrases when is nochebuena and when is nochebuena celebrated, as they both have SBARQ-*-403 as their root.
To generate a word lattice $W_q$ for a given question $q$ , we parse $q$ with the bi-layered grammar $G_{\mathrm {layered}}$ . For each rule of the form $X$ - $m_1$ - $m_2 \rightarrow w$ in the bi-layered tree with $X \in {\cal P}$ , $m_1 \in \lbrace 1, \ldots , 24 \rbrace $ , $m_2 \in \lbrace 1, \ldots , 1000 \rbrace $ and $w$ a word in $q$ , we extract rules of the form $X$ - $*$ - $m_2 \rightarrow w^{\prime }$ from $G_{\mathrm {layered}}$ such that $w^{\prime } \ne w$ . For each such $w^{\prime }$ , we add a path $w^{\prime }$ parallel to $w$ in the word lattice.
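A minimal sketch of this lattice construction for the word-level case is given below. The lexicon mapping (category, semantic state) pairs to words is a hypothetical stand-in for the rules extracted from the treebank, and phrase-level nodes such as WHNP-*-291 would be handled analogously by adding multi-word parallel paths.

```python
# For every word w at position i with annotation (X, m1, m2), any word w'
# observed in the treebank under the same category X and semantic state m2
# is added as a parallel path. `lexicon` maps (X, m2) -> set of words and is
# a hypothetical stand-in; the state numbers below are toy values.
def bilayered_lattice(leaf_annotations, lexicon):
    # leaf_annotations: list of (word, X, m1, m2) in sentence order.
    edges = set()
    for i, (w, X, m1, m2) in enumerate(leaf_annotations):
        edges.add((i, i + 1, w))
        for w_alt in lexicon.get((X, m2), set()) - {w}:
            edges.add((i, i + 1, w_alt))
    return edges

toy_lexicon = {("NN", 142): {"day", "date"}, ("VBZ", 7): {"is"}}
leaves = [("what", "WP", 2, 901), ("day", "NN", 3, 142),
          ("is", "VBZ", 1, 7), ("nochebuena", "NNP", 4, 650)]
lattice = bilayered_lattice(leaves, toy_lexicon)
```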
Paraphrase Classification
Our sampling algorithm overgenerates, and some of the sampled paraphrases are incorrect. To improve its precision, we build a binary classifier to filter the generated paraphrases. We randomly select 100 distinct questions from the Paralex corpus and generate paraphrases using our generation algorithm with various lattice settings. We randomly select 1,000 pairs of input-sampled sentences and manually annotate them as “correct” or “incorrect” paraphrases. We train our classifier on this manually created training data. We follow madnani2012, who used MT metrics for paraphrase identification, and experiment with 8 MT metrics as features for our binary classifier. In addition, we experiment with a binary feature that checks whether the sampled paraphrase preserves named entities from the input sentence. We use WEKA BIBREF32 to replicate the classifier of madnani2012 with our new feature. We tune the feature set for our classifier on the development data.
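A minimal sketch of such a filtering classifier is shown below, assuming a couple of cheap surface-overlap scores and a named-entity-preservation flag as stand-ins for the 8 MT-metric features; the actual system uses WEKA and the feature set of madnani2012.

```python
from sklearn.linear_model import LogisticRegression

# Toy surface-overlap features standing in for MT metrics, plus the
# named-entity-preservation flag described above.
def ngram_overlap(src, hyp, n):
    grams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    s, h = grams(src.split()), grams(hyp.split())
    return len(s & h) / max(len(h), 1)

def features(src, hyp, entities):
    return [ngram_overlap(src, hyp, 1),
            ngram_overlap(src, hyp, 2),
            float(all(e in hyp for e in entities))]   # NE preservation

# Toy annotated pairs: (input question, sampled paraphrase, entities, label).
train = [("what day is nochebuena", "when is nochebuena", ["nochebuena"], 1),
         ("what day is nochebuena", "what day is it", ["nochebuena"], 0),
         ("who wrote hamlet", "who is the author of hamlet", ["hamlet"], 1),
         ("who wrote hamlet", "who wrote macbeth", ["hamlet"], 0)]
X = [features(q, p, es) for q, p, es, _ in train]
y = [label for _, _, _, label in train]
clf = LogisticRegression().fit(X, y)
```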
Semantic Parsing using Paraphrasing
In this section we describe how the paraphrase algorithm is used for converting natural language to Freebase queries. Following reddylargescale2014, we formalize the semantic parsing problem as a graph matching problem, i.e., finding the Freebase subgraph (grounded graph) that is isomorphic to the input question semantic structure (ungrounded graph).
This formulation has a major limitation that can be alleviated by using our paraphrase generation algorithm. Consider the question What language do people in Czech Republic speak?. The ungrounded graph corresponding to this question is shown in Figure 3 . The Freebase grounded graph which results in the correct answer is shown in Figure 3 . Note that these two graphs are non-isomorphic, making it impossible to derive the correct grounding from the ungrounded graph. In fact, at least 15% of the examples in our development set fail to satisfy the isomorphism assumption. In order to address this problem, we use paraphrases of the input question to generate additional ungrounded graphs, with the aim that one of those paraphrases will have a structure isomorphic to the correct grounding. Figure 3 and Figure 3 are two such paraphrases which can be converted to Figure 3 as described in sec:groundedGraphs.
For a given input question, first we build ungrounded graphs from its paraphrases. We convert these graphs to Freebase graphs. To learn this mapping, we rely on manually assembled question-answer pairs. For each training question, we first find the set of oracle grounded graphs—Freebase subgraphs which when executed yield the correct answer—derivable from the question's ungrounded graphs. These oracle graphs are then used to train a structured perceptron model. These steps are discussed in detail below.
Ungrounded Graphs from Paraphrases
We use GraphParser BIBREF7 to convert paraphrases to ungrounded graphs. This conversion involves three steps: 1) parsing the paraphrase using a CCG parser to extract syntactic derivations BIBREF33 , 2) extracting logical forms from the CCG derivations BIBREF34 , and 3) converting the logical forms to an ungrounded graph. The ungrounded graph for the example question and its paraphrases are shown in Figure 3 , Figure 3 and Figure 3 , respectively.
Grounded Graphs from Ungrounded Graphs
The ungrounded graphs are grounded to Freebase subgraphs by mapping entity nodes, entity-entity edges and entity type nodes in the ungrounded graph to Freebase entities, relations and types, respectively. For example, the graph in Figure 3 can be converted to a Freebase graph in Figure 3 by replacing the entity node Czech Republic with the Freebase entity CzechRepublic, the edge (speak.arg $_2$ , speak.in) between $x$ and Czech Republic with the Freebase relation (location.country.official_language.2, location.country.official_language.1), and the type node language with the Freebase type language.human_language, while the target node remains intact. The rest of the nodes, edges and types are grounded to null. In a similar fashion, Figure 3 can be grounded to Figure 3 , but not Figure 3 to Figure 3 . If no paraphrase is isomorphic to the target grounded graph, our grounding fails.
Learning
We use a linear model to map ungrounded graphs to grounded ones. The parameters of the model are learned from question-answer pairs; for example, the question What language do people in Czech Republic speak? is paired with its answer $\lbrace \textsc {CzechLanguage}\rbrace $ . In line with most work on question answering against Freebase, we do not rely on annotated logical forms associated with the question for training, and treat the mapping of a question to its grounded graph as latent.
Let $q$ be a question, let $p$ be a paraphrase, let $u$ be an ungrounded graph for $p$ , and let $g$ be a grounded graph formed by grounding the nodes and edges of $u$ to the knowledge base $\mathcal {K}$ (throughout we use Freebase as the knowledge base). Following reddylargescale2014, we use beam search to find the highest scoring tuple of paraphrase, ungrounded and grounded graphs $(\hat{p}, \hat{u}, \hat{g})$ under the model $\theta \in \mathbb {R}^n$ : $ ({\hat{p},\hat{u},\hat{g}}) = \operatornamewithlimits{arg\,max}_{(p,u,g)} \theta \cdot \Phi (p,u,g,q,\mathcal {K})\,, $
where $\Phi (p, u, g, q, \mathcal {K}) \in \mathbb {R}^n$ denotes the features for the tuple of paraphrase, ungrounded and grounded graphs. The feature function has access to the paraphrase, ungrounded and grounded graphs, the original question, as well as to the content of the knowledge base and the denotation $|g|_\mathcal {K}$ (the denotation of a grounded graph is defined as the set of entities or attributes reachable at its target node). See sec:details for the features employed. The model parameters are estimated with the averaged structured perceptron BIBREF35 . Given a training question-answer pair $(q,\mathcal {A})$ , the update is: $ \theta ^{t+1} \leftarrow \theta ^{t} + \Phi (p^+, u^+, g^+, q, \mathcal {K}) - \Phi (\hat{p}, \hat{u}, \hat{g}, q, \mathcal {K})\,, $
where $({p^+,u^+,g^+})$ denotes the tuple of gold paraphrase, gold ungrounded and grounded graphs for $q$ . Since we do not have direct access to the gold paraphrase and graphs, we instead rely on the set of oracle tuples, $\mathcal {O}_{\mathcal {K}, \mathcal {A}}(q)$ , as a proxy: $ (p^{+},u^{+},{g^{+}}) = \operatornamewithlimits{arg\,max}_{(p,u,g) \in \mathcal {O}_{\mathcal {K},\mathcal {A}}(q)} \theta \cdot \Phi ({p,u,g,q,\mathcal {K}})\,, $
where $\mathcal {O}_{\mathcal {K}, \mathcal {A}}(q)$ is defined as the set of tuples ( $p$ , $u$ , $g$ ) derivable from the question $q$ , whose denotation $|g|_\mathcal {K}$ has minimal $F_1$ -loss against the gold answer $\mathcal {A}$ . We find the oracle graphs for each question a priori by performing beam-search with a very large beam.
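A compact sketch of the averaged structured perceptron loop is given below; the predict and oracle callables are placeholders for the beam-search decoder and the precomputed oracle search, and each is assumed to return the feature vector $\Phi $ of the tuple it selects.

```python
import numpy as np

# Averaged structured perceptron over (paraphrase, ungrounded, grounded)
# tuples. `predict(q, theta)` and `oracle(q, answer, theta)` are hypothetical
# callables returning the feature vectors Phi of the highest-scoring tuple
# and of the best oracle tuple, respectively.
def structured_perceptron(train_data, phi_dim, predict, oracle, epochs=5):
    theta = np.zeros(phi_dim)
    theta_sum = np.zeros(phi_dim)
    updates = 0
    for _ in range(epochs):
        for question, answer in train_data:
            phi_hat = predict(question, theta)           # decoder prediction
            phi_plus = oracle(question, answer, theta)   # oracle proxy for gold
            theta += phi_plus - phi_hat                  # perceptron update
            theta_sum += theta
            updates += 1
    return theta_sum / max(updates, 1)                   # averaged parameters
```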
Experimental Setup
Below, we give details on the evaluation dataset and baselines used for comparison. We also describe the model features and provide implementation details.
Evaluation Data and Metric
We evaluate our approach on the WebQuestions dataset BIBREF5 . WebQuestions consists of 5,810 question-answer pairs where the questions represent real Google search queries. We use the standard train/test splits, with 3,778 training and 2,032 test questions. For our development experiments we tune the models on held-out data consisting of 30% of the training questions, while for final testing we use the complete training data. We use average precision (avg P.), average recall (avg R.) and average F$_1$ (avg F$_1$ ) proposed by berantsemantic2013 as evaluation metrics.
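Assuming the standard set-based definitions, the per-question metrics and their averages can be computed as in the sketch below; consult berantsemantic2013 for the exact evaluation protocol.

```python
# Per-question precision/recall/F1 over answer sets, averaged over questions.
def prf1(pred, gold):
    if not pred and not gold:
        return 1.0, 1.0, 1.0
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def average_metrics(predictions, golds):
    scores = [prf1(set(p), set(g)) for p, g in zip(predictions, golds)]
    n = len(scores) or 1
    return tuple(sum(s[i] for s in scores) / n for i in range(3))

avg_p, avg_r, avg_f1 = average_metrics([{"CzechLanguage"}], [{"CzechLanguage"}])
```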
Baselines
We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.
We compare our paraphrasing models with a monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use the Moses decoder to generate the 10-best distinct paraphrases for the test questions.
Implementation Details
For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate the $n$ -best paraphrases that contain the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem.
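The joint entity disambiguation step can be illustrated with the brute-force sketch below, where candidate (span, entity, score) triples are combined into non-overlapping assignments and the best ones are kept; the exhaustive enumeration and the toy Freebase identifiers stand in for the actual lattice search and API scores.

```python
from itertools import combinations

# Enumerate joint assignments of entities to non-overlapping mention spans and
# keep the k highest-scoring ones (a simplification of the top-k lattice paths).
def top_entity_assignments(candidates, k=10):
    def compatible(assignment):
        spans = [(s, e) for (s, e), _, _ in assignment]
        return all(a[1] <= b[0] or b[1] <= a[0]
                   for a, b in combinations(spans, 2))
    assignments = []
    for r in range(1, len(candidates) + 1):
        for combo in combinations(candidates, r):
            if compatible(combo):
                assignments.append((sum(c[2] for c in combo), combo))
    assignments.sort(key=lambda x: -x[0])
    return [a for _, a in assignments[:k]]

# Toy candidates: ((start, end), entity_id, score); identifiers are made up.
cands = [((0, 2), "m.czech_republic", 0.9), ((0, 1), "m.czech_language", 0.4)]
best = top_entity_assignments(cands)
```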
We use the features from reddylargescale2014. These include edge alignments and stem overlaps between ungrounded and grounded graphs, and contextual features such as word and grounded relation pairs. In addition to these features, we add two new real-valued features – the paraphrase classifier's score and the entity disambiguation lattice score.
We use beam search to infer the highest scoring graph pair for a question. The search operates over entity-entity edges and entity type nodes of each ungrounded graph. For an entity-entity edge, there are two operations: ground the edge to a Freebase relation, or skip the edge. Similarly, for an entity type node, there are two operations: ground the node to a Freebase type, or skip the node. We use a beam size of 100 in all our experiments.
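A schematic version of this beam search over grounding decisions is sketched below; the score and candidate_groundings callables are hypothetical placeholders for the model score $\theta \cdot \Phi (\cdot )$ and for the Freebase candidates of each edge or type node.

```python
# Beam search over grounding decisions: each ungrounded edge or type node is
# either grounded to one of its Freebase candidates or skipped, and only the
# `beam_size` highest-scoring partial groundings survive each decision.
def beam_ground(items_to_ground, candidate_groundings, score, beam_size=100):
    beam = [()]                                   # start from the empty grounding
    for item in items_to_ground:                  # entity-entity edges, then types
        expanded = []
        for partial in beam:
            expanded.append(partial + ((item, None),))        # skip operation
            for cand in candidate_groundings(item):
                expanded.append(partial + ((item, cand),))    # ground operation
        beam = sorted(expanded, key=score, reverse=True)[:beam_size]
    return beam[0] if beam else ()
```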
Results and Discussion
In this section, we present results from five different systems for our question-answering experiments: original, mt, naive, ppdb and bilayered. The first two are baseline systems. The other three systems use paraphrases generated from an L-PCFG grammar: naive uses a word lattice with a single start-to-end path representing the input question itself, ppdb uses a word lattice constructed using the PPDB rules, and bilayered uses the bi-layered L-PCFG to build word lattices. Note that naive does not require any parallel resource to train, ppdb requires an external paraphrase database, and bilayered, like mt, needs a parallel corpus with paraphrase pairs. We tune our classifier features and GraphParser features on the development data. We use the best setting from tuning for evaluation on the test data.
Conclusion
We described a grammar-based method for generating paraphrases of questions, and applied it to a question answering system based on semantic parsing. We showed that using paraphrases is a useful way to improve the performance of a question answering system. Our method is rather generic and can be applied to any question answering system.
Acknowledgements
The authors would like to thank Nitin Madnani for his help with the implementation of the paraphrase classifier. We would like to thank our anonymous reviewers for their insightful comments. This research was supported by an EPSRC grant (EP/L02411X/1), the H2020 project SUMMA (under grant agreement 688139), and a Google PhD Fellowship for the second author.
423bb905e404e88a168e7e807950e24ca166306c | 423bb905e404e88a168e7e807950e24ca166306c_0 | Q: What are the baselines?
Text: Introduction
Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer people in Czech Republic refers to Czech Republic and What refers to the language and speak refers to the predicate official language.
We address the above problems by using paraphrases of the original question. Paraphrasing has shown to be promising for semantic parsing BIBREF9 , BIBREF10 , BIBREF11 . We propose a novel framework for paraphrasing using latent-variable PCFGs (L-PCFGs). Earlier approaches to paraphrasing used phrase-based machine translation for text-based QA BIBREF12 , BIBREF13 , or hand annotated grammars for KB-based QA BIBREF10 . We find that phrase-based statistical machine translation (MT) approaches mainly produce lexical paraphrases without much syntactic diversity, whereas our grammar-based approach is capable of producing both lexically and syntactically diverse paraphrases. Unlike MT based approaches, our system does not require aligned parallel paraphrase corpora. In addition we do not require hand annotated grammars for paraphrase generation but instead learn the grammar directly from a large scale question corpus.
The main contributions of this paper are two fold. First, we present an algorithm (§ "Paraphrase Generation Using Grammars" ) to generate paraphrases using latent-variable PCFGs. We use the spectral method of narayan-15 to estimate L-PCFGs on a large scale question treebank. Our grammar model leads to a robust and an efficient system for paraphrase generation in open-domain question answering. While CFGs have been explored for paraphrasing using bilingual parallel corpus BIBREF14 , ours is the first implementation of CFG that uses only monolingual data. Second, we show that generated paraphrases can be used to improve semantic parsing of questions into Freebase logical forms (§ "Semantic Parsing using Paraphrasing" ). We build on a strong baseline of reddylargescale2014 and show that our grammar model competes with MT baseline even without using any parallel paraphrase resources.
Paraphrase Generation Using Grammars
Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .
In our estimation of L-PCFGs, we use the spectral method of narayan-15, instead of using EM, as has been used in the past by matsuzaki-2005 and petrov-2006. The spectral method we use enables the choice of a set of feature functions that indicate the latent states, which proves to be useful in our case. It also leads to sparse grammar estimates and compact models.
The spectral method works by identifying feature functions for “inside” and “outside” trees, and then clusters them into latent states. Then it follows with a maximum likelihood estimation step, that assumes the latent states are represented by clusters obtained through the feature function clustering. For more details about these constructions, we refer the reader to cohen-13 and narayan-15.
The rest of this section describes our paraphrase generation algorithm.
Paraphrases Generation Algorithm
We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.
We first build a word lattice $W_q$ for the input question $q$ . We use the lattice to constrain our paraphrases to a specific choice of words and phrases that can be used. Once this lattice is created, a grammar $G_{\mathrm {syn}}^{\prime }$ is then extracted from $G_{\mathrm {syn}}$ . This grammar is constrained to the lattice.
We experiment with three ways of constructing word lattices: naïve word lattices representing the words from the input question only, word lattices constructed with the Paraphrase Database BIBREF14 and word lattices constructed with a bi-layered L-PCFG, described in § "Bi-Layered L-PCFGs" . For example, Figure 1 shows an example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.
Once $G_{\mathrm {syn}}^{\prime }$ is generated, we sample paraphrases of the input question $q$ . These paraphrases are further filtered with a classifier to improve the precision of the generated paraphrases.
We train the L-PCFG $G_{\mathrm {syn}}$ on the Paralex corpus BIBREF9 . Paralex is a large monolingual parallel corpus, containing 18 million pairs of question paraphrases with 2.4M distinct questions in the corpus. It is suitable for our task of generating paraphrases since its large scale makes our model robust for open-domain questions. We construct a treebank by parsing 2.4M distinct questions from Paralex using the BLLIP parser BIBREF25 .
Given the treebank, we use the spectral algorithm of narayan-15 to learn an L-PCFG for constituency parsing to learn $G_{\mathrm {syn}}$ . We follow narayan-15 and use the same feature functions for the inside and outside trees as they use, capturing contextual syntactic information about nonterminals. We refer the reader to narayan-15 for more detailed description of these features. In our experiments, we set the number of latent states to 24.
Once we estimate $G_{\mathrm {syn}}$ from the Paralex corpus, we restrict it for each question to a grammar $G_{\mathrm {syn}}^{\prime }$ by keeping only the rules that could lead to a derivation over the lattice. This step is similar to lexical pruning in standard grammar-based generation process to avoid an intermediate derivation which can never lead to a successful derivation BIBREF26 , BIBREF27 .
Sampling a question from the grammar $G_{\mathrm {syn}}^{\prime }$ is done by recursively sampling nodes in the derivation tree, together with their latent states, in a top-down breadth-first fashion. Sampling from the pruned grammar $G_{\mathrm {syn}}^{\prime }$ raises an issue of oversampling words that are more frequent in the training data. To lessen this problem, we follow a controlled sampling approach where sampling is guided by the word lattice $W_q$ . Once a word $w$ from a path $e$ in $W_q$ is sampled, all other parallel or conflicting paths to $e$ are removed from $W_q$ . For example, generating for the word lattice in Figure 1 , when we sample the word citizens, we drop out the paths “human beings”, “people's”, “the population”, “people” and “members of the public” from $W_q$ and accordingly update the grammar. The controlled sampling ensures that each sampled question uses words from a single start-to-end path in $W_q$ . For example, we could sample a question what is Czech Republic 's language? by sampling words from the path (what, language, do, people 's, in, Czech, Republic, is speaking, ?) in Figure 1 . We repeat this sampling process to generate multiple potential paraphrases.
The resulting generation algorithm has multiple advantages over existing grammar generation methods. First, the sampling from an L-PCFG grammar lessens the lexical ambiguity problem evident in lexicalized grammars such as tree adjoining grammars BIBREF27 and combinatory categorial grammars BIBREF28 . Our grammar is not lexicalized, only unary context-free rules are lexicalized. Second, the top-down sampling restricts the combinatorics inherent to bottom-up search BIBREF29 . Third, we do not restrict the generation by the order information in the input. The lack of order information in the input often raises the high combinatorics in lexicalist approaches BIBREF30 . In our case, however, we use sampling to reduce this problem, and it allows us to produce syntactically diverse questions. And fourth, we impose no constraints on the grammar thereby making it easier to maintain bi-directional (recursive) grammars that can be used both for parsing and for generation BIBREF31 .
Bi-Layered L-PCFGs
As mentioned earlier, one of our lattice types is based on bi-layered PCFGs introduced here.
In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.
To create the bi-layered L-PCFG, we again use the spectral algorithm of narayan-15 to estimate a grammar $G_{\mathrm {par}}$ from the Paralex corpus. We use the word alignment of paraphrase question pairs in Paralex to map inside and outside trees of each nonterminals in the treebank to bag of word features. The number of latent states we use is 1,000.
Once the two feature functions (syntactic in $G_{\mathrm {syn}}$ and semantic in $G_{\mathrm {par}}$ ) are created, each nonterminal in the training treebank is assigned two latent states (cluster identifiers). Figure 2 shows an example annotation of trees for three paraphrase questions from the Paralex corpus. We compute the parameters of the bi-layered L-PCFG $G_{\mathrm {layered}}$ with a simple frequency count maximum likelihood estimate over this annotated treebank. As such, $G_{\mathrm {layered}}$ is a combination of $G_{\mathrm {syn}}$ and $G_{\mathrm {par}}$ , resulting in 24,000 latent states (24 syntactic x 1000 semantic).
Consider an example where we want to generate paraphrases for the question what day is nochebuena. Parsing it with $G_{\mathrm {layered}}$ will lead to the leftmost hybrid structure as shown in Figure 2 . The assignment of the first latent states for each nonterminals ensures that we retrieve the correct syntactic representation of the sentence. Here, however, we are more interested in the second latent states assigned to each nonterminals which capture the paraphrase information of the sentence at various levels. For example, we have a unary lexical rule (NN-*-142 day) indicating that we observe day with NN of the paraphrase type 142. We could use this information to extract unary rules of the form (NN-*-142 $w$ ) in the treebank that will generate words $w$ which are paraphrases to day. Similarly, any node WHNP-*-291 in the treebank will generate paraphrases for what day, SBARQ-*-403, for what day is nochebuena. This way we will be able to generate paraphrases when is nochebuena and when is nochebuena celebrated as they both have SBARQ-*-403 as their roots.
To generate a word lattice $W_q$ for a given question $q$ , we parse $q$ with the bi-layered grammar $G_{\mathrm {layered}}$ . For each rule of the form $X$ - $m_1$ - $m_2 \rightarrow w$ in the bi-layered tree with $X \in {\cal P}$ , $m_1 \in \lbrace 1, \ldots , 24 \rbrace $ , $m_2 \in \lbrace 1, \ldots , 1000 \rbrace $ and $q$0 a word in $q$1 , we extract rules of the form $q$2 - $q$3 - $q$4 from $q$5 such that $q$6 . For each such $q$7 , we add a path $q$8 parallel to $q$9 in the word lattice.
Paraphrase Classification
Our sampling algorithm overgenerates paraphrases which are incorrect. To improve its precision, we build a binary classifier to filter the generated paraphrases. We randomly select 100 distinct questions from the Paralex corpus and generate paraphrases using our generation algorithm with various lattice settings. We randomly select 1,000 pairs of input-sampled sentences and manually annotate them as “correct” or “incorrect” paraphrases. We train our classifier on this manually created training data. We follow madnani2012, who used MT metrics for paraphrase identification, and experiment with 8 MT metrics as features for our binary classifier. In addition, we experiment with a binary feature which checks if the sampled paraphrase preserves named entities from the input sentence. We use WEKA BIBREF32 to replicate the classifier of madnani2012 with our new feature. We tune the feature set for our classifier on the development data.
Semantic Parsing using Paraphrasing
In this section we describe how the paraphrase algorithm is used for converting natural language to Freebase queries. Following reddylargescale2014, we formalize the semantic parsing problem as a graph matching problem, i.e., finding the Freebase subgraph (grounded graph) that is isomorphic to the input question semantic structure (ungrounded graph).
This formulation has a major limitation that can be alleviated by using our paraphrase generation algorithm. Consider the question What language do people in Czech Republic speak?. The ungrounded graph corresponding to this question is shown in Figure 3 . The Freebase grounded graph which results in correct answer is shown in Figure 3 . Note that these two graphs are non-isomorphic making it impossible to derive the correct grounding from the ungrounded graph. In fact, at least 15% of the examples in our development set fail to satisfy isomorphic assumption. In order to address this problem, we use paraphrases of the input question to generate additional ungrounded graphs, with the aim that one of those paraphrases will have a structure isomorphic to the correct grounding. Figure 3 and Figure 3 are two such paraphrases which can be converted to Figure 3 as described in sec:groundedGraphs.
For a given input question, first we build ungrounded graphs from its paraphrases. We convert these graphs to Freebase graphs. To learn this mapping, we rely on manually assembled question-answer pairs. For each training question, we first find the set of oracle grounded graphs—Freebase subgraphs which when executed yield the correct answer—derivable from the question's ungrounded graphs. These oracle graphs are then used to train a structured perceptron model. These steps are discussed in detail below.
Ungrounded Graphs from Paraphrases
We use GraphParser BIBREF7 to convert paraphrases to ungrounded graphs. This conversion involves three steps: 1) parsing the paraphrase using a CCG parser to extract syntactic derivations BIBREF33 , 2) extracting logical forms from the CCG derivations BIBREF34 , and 3) converting the logical forms to an ungrounded graph. The ungrounded graph for the example question and its paraphrases are shown in Figure 3 , Figure 3 and Figure 3 , respectively.
Grounded Graphs from Ungrounded Graphs
The ungrounded graphs are grounded to Freebase subgraphs by mapping entity nodes, entity-entity edges and entity type nodes in the ungrounded graph to Freebase entities, relations and types, respectively. For example, the graph in Figure 3 can be converted to a Freebase graph in Figure 3 by replacing the entity node Czech Republic with the Freebase entity CzechRepublic, the edge (speak.arg $_2$ , speak.in) between $x$ and Czech Republic with the Freebase relation (location.country.official_language.2, location.country.official_language.1), the type node language with the Freebase type language.human_language, and the target node remains intact. The rest of the nodes, edges and types are grounded to null. In a similar fashion, Figure 3 can be grounded to Figure 3 , but not Figure 3 to Figure 3 . If no paraphrase is isomorphic to the target grounded grounded graph, our grounding fails.
Learning
We use a linear model to map ungrounded graphs to grounded ones. The parameters of the model are learned from question-answer pairs. For example, the question What language do people in Czech Republic speak? is paired with its answer $\lbrace \textsc {CzechLanguage}\rbrace $ . In line with most work on question answering against Freebase, we do not rely on annotated logical forms associated with the question for training and treat the mapping of a question to its grounded graph as latent.
Let $q$ be a question, let $p$ be a paraphrase, let $u$ be an ungrounded graph for $p$ , and let $g$ be a grounded graph formed by grounding the nodes and edges of $u$ to the knowledge base $\mathcal {K}$ (throughout we use Freebase as the knowledge base). Following reddylargescale2014, we use beam search to find the highest scoring tuple of paraphrase, ungrounded and grounded graphs $(\hat{p}, \hat{u}, \hat{g})$ under the model $\theta \in \mathbb {R}^n$ : $ ({\hat{p},\hat{u},\hat{g}}) = \operatornamewithlimits{arg\,max}_{(p,u,g)} \theta \cdot \Phi (p,u,g,q,\mathcal {K})\,, $
where $\Phi (p, u, g, q, \mathcal {K}) \in \mathbb {R}^n$ denotes the features for the tuple of paraphrase, ungrounded and grounded graphs. The feature function has access to the paraphrase, ungrounded and grounded graphs, the original question, as well as to the content of the knowledge base and the denotation $|g|_\mathcal {K}$ (the denotation of a grounded graph is defined as the set of entities or attributes reachable at its target node). See sec:details for the features employed. The model parameters are estimated with the averaged structured perceptron BIBREF35 . Given a training question-answer pair $(q,\mathcal {A})$ , the update is: $ \theta ^{t+1} \leftarrow \theta ^{t} + \Phi (p^+, u^+, g^+, q, \mathcal {K}) - \Phi (\hat{p}, \hat{u}, \hat{g}, q, \mathcal {K})\,, $
where $({p^+,u^+,g^+})$ denotes the tuple of gold paraphrase, gold ungrounded and grounded graphs for $q$ . Since we do not have direct access to the gold paraphrase and graphs, we instead rely on the set of oracle tuples, $\mathcal {O}_{\mathcal {K}, \mathcal {A}}(q)$ , as a proxy: $ (p^{+},u^{+},{g^{+}}) = \operatornamewithlimits{arg\,max}_{(p,u,g) \in \mathcal {O}_{\mathcal {K},\mathcal {A}}(q)} \theta \cdot \Phi ({p,u,g,q,\mathcal {K}})\,, $
where $\mathcal {O}_{\mathcal {K}, \mathcal {A}}(q)$ is defined as the set of tuples ( $p$ , $u$ , $g$ ) derivable from the question $q$ , whose denotation $|g|_\mathcal {K}$ has minimal $F_1$ -loss against the gold answer $\mathcal {A}$ . We find the oracle graphs for each question a priori by performing beam-search with a very large beam.
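The learning loop can be pictured as the averaged structured perceptron sketch below; the two beam-search decoders that return the NumPy feature vector of the model's best tuple and of the best oracle tuple are assumed helper functions and are not spelled out.

```python
import numpy as np

def averaged_structured_perceptron(train, n_features, decode_best, decode_oracle, epochs=5):
    """train: iterable of (question, gold_answer) pairs.
    decode_best(theta, q)      -> feature vector Phi(p^, u^, g^) of the model's best tuple
    decode_oracle(theta, q, A) -> feature vector Phi(p+, u+, g+) of the best oracle tuple
    Both decoders are assumed to run the beam search described above."""
    theta = np.zeros(n_features)
    theta_sum = np.zeros(n_features)
    updates = 0
    for _ in range(epochs):
        for q, gold_answer in train:
            phi_hat = decode_best(theta, q)
            phi_plus = decode_oracle(theta, q, gold_answer)
            theta += phi_plus - phi_hat      # perceptron update
            theta_sum += theta
            updates += 1
    return theta_sum / max(updates, 1)       # averaged parameters
```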
Experimental Setup
Below, we give details on the evaluation dataset and baselines used for comparison. We also describe the model features and provide implementation details.
Evaluation Data and Metric
We evaluate our approach on the WebQuestions dataset BIBREF5 . WebQuestions consists of 5,810 question-answer pairs where the questions represent real Google search queries. We use the standard train/test splits, with 3,778 training and 2,032 test questions. For our development experiments we tune the models on held-out data consisting of 30% of the training questions, while for final testing we use the complete training data. We use average precision (avg P.), average recall (avg R.) and average F $_1$ (avg F $_1$ ) proposed by berantsemantic2013 as evaluation metrics.
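A rough sketch of these answer-set metrics is given below, assuming predicted and gold answers are sets of entities or attributes; the zero-credit handling of empty answer sets is a simplifying assumption rather than the official evaluation script.

```python
def prf1(pred, gold):
    """Precision, recall and F1 of one predicted answer set against the gold set."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0, 0.0, 0.0
    tp = len(pred & gold)
    p = tp / len(pred)
    r = tp / len(gold)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def average_scores(predictions, golds):
    """Average P, R and F1 over all questions."""
    scores = [prf1(p, g) for p, g in zip(predictions, golds)]
    n = len(scores) or 1
    return tuple(sum(s[i] for s in scores) / n for i in range(3))
```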
Baselines
We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.
We compare our paraphrasing models with a monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use the Moses decoder to generate the 10-best distinct paraphrases for the test questions.
Implementation Details
For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem.
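The sketch below illustrates how one such POS pattern can be matched against a tagged question to propose candidate mention spans; the tag-joining scheme and offset bookkeeping are illustrative assumptions, not the system's actual implementation.

```python
import re

# One of the handcrafted patterns: optional determiner, up to two adjectives/nouns,
# then a noun. Tags are matched against a "/"-delimited string of POS tags
# (the joining scheme is an assumption made for this sketch).
PATTERN = re.compile(r"(DT/)?((JJ\w?|NN\w?)/){0,2}NN\w?/")

def candidate_mention_spans(tagged_tokens):
    """tagged_tokens: list of (word, pos) pairs from a POS tagger."""
    tag_string = "".join(tag + "/" for _, tag in tagged_tokens)
    spans = []
    for m in PATTERN.finditer(tag_string):
        start = tag_string[:m.start()].count("/")
        end = tag_string[:m.end()].count("/")
        spans.append((start, end))   # token offsets of the candidate mention
    return spans

# e.g. candidate_mention_spans([("the", "DT"), ("big", "JJ"), ("lebowski", "NN")]) -> [(0, 3)]
```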
We use the features from reddylargescale2014. These include edge alignments and stem overlaps between ungrounded and grounded graphs, and contextual features such as word and grounded relation pairs. In addition to these features, we add two new real-valued features – the paraphrase classifier's score and the entity disambiguation lattice score.
We use beam search to infer the highest scoring graph pair for a question. The search operates over entity-entity edges and entity type nodes of each ungrounded graph. For an entity-entity edge, there are two operations: ground the edge to a Freebase relation, or skip the edge. Similarly, for an entity type node, there are two operations: ground the node to a Freebase type, or skip the node. We use a beam size of 100 in all our experiments.
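This grounding search can be pictured as the generic beam search sketched below; the actions and score callbacks (ground an edge or node to Freebase, or skip it, scored by the model) are assumed interfaces.

```python
def beam_search(initial_state, items, actions, score, beam_size=100):
    """items: the entity-entity edges and entity type nodes of the ungrounded graph.
    actions(state, item) yields the states reached by grounding the item to a
    Freebase relation/type or by skipping it; score(state) is the model score."""
    beam = [initial_state]
    for item in items:
        candidates = [nxt for state in beam for nxt in actions(state, item)]
        beam = sorted(candidates, key=score, reverse=True)[:beam_size]
    return beam[0] if beam else initial_state   # highest-scoring grounded graph
```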
Results and Discussion
In this section, we present results from five different systems for our question-answering experiments: original, mt, naive, ppdb and bilayered. The first two are baseline systems. The other three systems use paraphrases generated from an L-PCFG grammar. naive uses a word lattice with a single start-to-end path representing the input question itself, ppdb uses a word lattice constructed using the PPDB rules, and bilayered uses a bi-layered L-PCFG to build word lattices. Note that naive does not require any parallel resource to train, ppdb requires an external paraphrase database, and bilayered, like mt, needs a parallel corpus with paraphrase pairs. We tune our classifier features and GraphParser features on the development data and use the best setting from tuning for evaluation on the test data.
Conclusion
We described a grammar method to generate paraphrases for questions, and applied it to a question answering system based on semantic parsing. We showed that using paraphrases for a question answering system is a useful way to improve its performance. Our method is rather generic and can be applied to any question answering system.
Acknowledgements
The authors would like to thank Nitin Madnani for his help with the implementation of the paraphrase classifier. We would like to thank our anonymous reviewers for their insightful comments. This research was supported by an EPSRC grant (EP/L02411X/1), the H2020 project SUMMA (under grant agreement 688139), and a Google PhD Fellowship for the second author. | GraphParser without paraphrases, monolingual machine translation based model for paraphrase generation |
e5ae8ac51946db7475bb20b96e0a22083b366a6d | e5ae8ac51946db7475bb20b96e0a22083b366a6d_0 | Q: Do they evaluate only on English data?
Text: Introduction
The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered overweight and over 600 million adults considered obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent, affecting 25 percent of U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool-aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 . Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .
Obesity can be reduced through modifiable lifestyle behaviors such as diet and exercise BIBREF4 . There are several comorbidities associated with being overweight or obese, such as diabetes BIBREF5 . The prevalence of diabetes in adults has risen globally from 4.7% in 1980 to 8.5% in 2014. Current projections estimate that by 2050, 29 million Americans will be diagnosed with type 2 diabetes, which is a 165% increase from the 11 million diagnosed in 2002 BIBREF6 . Studies show that there are strong relations among diabetes, diet, exercise, and obesity (DDEO) BIBREF7 , BIBREF4 , BIBREF8 , BIBREF9 ; however, the general public's perception of DDEO remains limited to survey-based studies BIBREF10 .
The growth of social media has provided a research opportunity to track public behaviors, information, and opinions about common health issues. It is estimated that the number of social media users will increase from 2.34 billion in 2016 to 2.95 billion in 2020 BIBREF11 . Twitter has 316 million users worldwide BIBREF12 providing a unique opportunity to understand users' opinions with respect to the most common health issues BIBREF13 . Publicly available Twitter posts have facilitated data collection and leveraged the research at the intersection of public health and data science; thus, informing the research community of major opinions and topics of interest among the general population BIBREF14 , BIBREF15 , BIBREF16 that cannot otherwise be collected through traditional means of research (e.g., surveys, interviews, focus groups) BIBREF17 , BIBREF18 . Furthermore, analyzing Twitter data can help health organizations such as state health departments and large healthcare systems to provide health advice and track health opinions of their populations and provide effective health advice when needed BIBREF13 .
Among computational methods to analyze tweets, computational linguistics is a well-known developed approach to gain insight into a population, track health issues, and discover new knowledge BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Twitter data has been used for a wide range of health and non-health related applications, such as stock market BIBREF23 and election analysis BIBREF24 . Some examples of Twitter data analysis for health-related topics include: flu BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , mental health BIBREF31 , Ebola BIBREF32 , BIBREF33 , Zika BIBREF34 , medication use BIBREF35 , BIBREF36 , BIBREF37 , diabetes BIBREF38 , and weight loss and obesity BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF21 .
The previous Twitter studies have dealt with extracting common topics of one health issue discussed by the users to better understand common themes; however, this study utilizes an innovative approach to computationally analyze unstructured health related text data exchanged via Twitter to characterize health opinions regarding four common health issues, including diabetes, diet, exercise and obesity (DDEO) on a population level. This study identifies the characteristics of the most common health opinions with respect to DDEO and discloses public perception of the relationship among diabetes, diet, exercise and obesity. These common public opinions/topics and perceptions can be used by providers and public health agencies to better understand the common opinions of their population denominators in regard to DDEO, and reflect upon those opinions accordingly.
Methods
Our approach uses semantic and linguistics analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis.
Data Collection
This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provide both historic and real-time data collection. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available on the first author's website. Figure FIGREF3 shows a sample of the collected tweets.
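A minimal sketch of this collection step is shown below, assuming the tweepy library (v3-style Stream/StreamListener API); the credentials, output path and query list are placeholders rather than the exact queries of Table TABREF6.

```python
# Sketch of real-time DDEO tweet collection via the Twitter streaming API,
# assuming tweepy 3.x; credentials and query terms below are placeholders.
import json
import tweepy

DDEO_QUERIES = ["diabetes", "diet", "exercise", "obesity"]

class DDEOListener(tweepy.StreamListener):
    def __init__(self, out_path):
        super().__init__()
        self.out = open(out_path, "a")

    def on_status(self, status):
        self.out.write(json.dumps(status._json) + "\n")   # store the raw tweet

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
stream = tweepy.Stream(auth=auth, listener=DDEOListener("ddeo_tweets.jsonl"))
stream.filter(track=DDEO_QUERIES, languages=["en"])      # real-time, English-only tweets
```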
Topic Discovery
To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes", “cancer", and “influenza" into a topic that has an overall “disease" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .
Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .
Twitter users can post their opinions or share information about a subject to the public. Identifying the main topics of users' tweets provides an interesting point of reference, but conceptualizing larger subtopics of millions of tweets can reveal valuable insight to users' opinions. The topic discovery component of the study approach uses LDA to find main topics, themes, and opinions in the collected tweets.
We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list to remove stop words, which do not have semantic value for analysis (such as “the"); and (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used, since the highest log-likelihood indicates the optimum number of topics BIBREF57 . The highest log-likelihood was obtained with 425 topics.
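The topic-number search can be sketched as below; the paper used Mallet's LDA, so gensim's LdaModel and the candidate grid here are stand-ins for illustration only.

```python
# Sketch of the topic-number search: train LDA on 80% of the tweets and keep the
# topic count with the highest held-out per-word log-likelihood.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def select_num_topics(tokenized_tweets, candidates=range(100, 501, 25)):
    """tokenized_tweets: list of token lists, stop words already removed."""
    dictionary = Dictionary(tokenized_tweets)
    corpus = [dictionary.doc2bow(toks) for toks in tokenized_tweets]
    split = int(0.8 * len(corpus))
    train, test = corpus[:split], corpus[split:]
    best_k, best_ll = None, float("-inf")
    for k in candidates:
        lda = LdaModel(corpus=train, id2word=dictionary, num_topics=k,
                       passes=5, random_state=1)
        ll = lda.log_perplexity(test)   # per-word log-likelihood bound (higher is better)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```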
Topic Content Analysis
The topic content analysis component used an objective, lexicon-based interpretation approach to analyze the content of topics. The lexicon-based approach uses dictionaries to disclose the semantic orientation of words in a topic. Linguistic Inquiry and Word Count (LIWC) is a linguistic analysis tool that reveals thoughts, feelings, personality, and motivations in a corpus BIBREF58 , BIBREF59 , BIBREF60 . LIWC has accepted rates of sensitivity, specificity, and English proficiency BIBREF61 . LIWC has a health-related dictionary that can help determine whether a topic contains words associated with health. In this analysis, we used LIWC to find health-related topics.
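A sketch of this lexicon-based filtering step is given below; since the LIWC dictionary is proprietary, the small health word set and the 10% threshold are illustrative assumptions.

```python
# Sketch of the lexicon-based filter: keep a topic if enough of its top words
# fall in a health dictionary. The word set and threshold are illustrative.
HEALTH_LEXICON = {"diabetes", "insulin", "obesity", "diet", "exercise",
                  "fitness", "weight", "doctor", "blood", "heart"}

def is_health_topic(top_words, threshold=0.1):
    """Keep a topic if at least `threshold` of its top words are health terms."""
    hits = sum(1 for w in top_words if w.lower() in HEALTH_LEXICON)
    return hits / len(top_words) >= threshold

# Example: a topic whose top words include several health terms is kept.
print(is_health_topic(["diet", "weight", "loss", "plan", "summer",
                       "beach", "fitness", "goals", "body", "food"]))   # True
```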
Results
Obesity and Diabetes showed the highest and the lowest number of tweets (51.7% and 8.0%). Diet and Exercise formed 23.7% and 16.6% of the tweets (Table TABREF6 ).
Out of all 4.5 million DDEO-related tweets returned by Twitter's API, the LDA found 425 topics. We used LIWC to filter the detected 425 topics and found 222 health-related topics. Additionally, we labeled topics based on the presence of DDEO words. For example, if a topic had “diet", we labeled it as a diet-related topic. As expected, and driven by the initial Twitter API queries, common topics were Diabetes, Diet, Exercise, and Obesity (DDEO). Table TABREF7 shows that the highest and the lowest number of topics were related to exercise and diabetes (80 and 21 out of 222). Diet and Obesity had similar rates (58 and 63 out of 222).
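The DDEO labeling step can be sketched as follows; checking a topic's top words is an assumption about how the presence of DDEO words in a topic was operationalized.

```python
DDEO_TERMS = ("diabetes", "diet", "exercise", "obesity")

def ddeo_labels(top_words):
    """Label a topic with every DDEO term appearing among its top words."""
    words = {w.lower() for w in top_words}
    return [term for term in DDEO_TERMS if term in words]

print(ddeo_labels(["diet", "obesity", "weight", "loss", "bmi"]))   # ['diet', 'obesity']
```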
Each of the DDEO topics included several common subtopics including both DDEO and non-DDEO terms discovered by the LDA algorithm (Table TABREF7 ). Common subtopics for “Diabetes", in order of frequency, included type 2 diabetes, obesity, diet, exercise, blood pressure, heart attack, yoga, and Alzheimer. Common subtopics for “Diet" included obesity, exercise, weight loss [medicine], celebrities, vegetarian, diabetes, religious diet, pregnancy, and mental health. Frequent subtopics for “Exercise" included fitness, obesity, daily plan, diet, brain, diabetes, and computer games. And finally, the most common subtopics for “Obesity" included diet, exercise, children, diabetes, Alzheimer, and cancer (Table TABREF7 ). Table TABREF8 provides illustrative examples for each of the topics and subtopics.
Further exploration of the subtopics revealed additional patterns of interest (Tables TABREF7 and TABREF8 ). We found 21 diabetes-related topics with 8 subtopics. While type 2 diabetes was the most frequent of the subtopics, heart attack, Yoga, and Alzheimer were the least frequent subtopics for diabetes. Diet had a wide variety of emerging themes, ranging from celebrity diet (e.g., Beyonce) to religious diet (e.g., Ramadan). Diet was detected in 63 topics with 10 subtopics; obesity was the most discussed diet-related subtopic, while pregnancy and mental health were the least discussed. Exploring the themes for Exercise subtopics revealed subjects such as computer games (e.g., Pokemon-Go) and brain exercises (e.g., memory improvement). Exercise had 7 subtopics, with fitness as the most discussed subtopic and computer games as the least discussed subtopic. Finally, Obesity themes showed topics such as Alzheimer (e.g., research studies) and cancer (e.g., breast cancer). Obesity had the lowest diversity of subtopics: six, with diet as the most discussed subtopic, and Alzheimer and cancer as the least discussed subtopics.
Diabetes subtopics show the relation between diabetes and exercise, diet, and obesity. Subtopics of diabetes revealed that users post about the relationship between diabetes and other diseases such as heart attack (Tables TABREF7 and TABREF8 ). The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature.
The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).
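How exactly these associations were computed is not spelled out here (the values appear only as placeholders), so the sketch below shows one plausible way: Pearson correlation between per-tweet proportions of two topics.

```python
# One plausible way to estimate the topic-topic associations reported above:
# Pearson correlation between per-tweet proportions of two DDEO topics. The
# paper's exact procedure and numeric values are not reproduced here.
from scipy.stats import pearsonr

def topic_correlation(doc_topic_matrix, topic_a, topic_b):
    """doc_topic_matrix: NumPy array of shape (n_tweets, n_topics) with LDA topic proportions."""
    r, p_value = pearsonr(doc_topic_matrix[:, topic_a], doc_topic_matrix[:, topic_b])
    return r, p_value
```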
Discussion
Diabetes, diet, exercise, and obesity are common public health-related topics of opinion. Analyzing individual-level opinions with automated algorithmic techniques can be a useful approach to better characterize the health opinions of a population. Traditional public health polls and surveys are limited by a small sample size; however, Twitter provides a platform to capture an array of opinions and shared information as expressed in the words of the tweeter. Studies show that Twitter data can be used to discover trending topics, and that there is a strong correlation between Twitter health conversations and Centers for Disease Control and Prevention (CDC) statistics BIBREF62 .
This research provides a computational content analysis approach to conduct a deep analysis using a large data set of tweets. Our framework decodes public health opinions in DDEO related tweets, which can be applied to other public health issues. Among health-related subtopics, there are a wide range of topics from diseases to personal experiences such as participating in religious activities or vegetarian diets.
Diabetes subtopics showed the relationship between diabetes and exercise, diet, and obesity (Tables TABREF7 and TABREF8 ). Subtopics of diabetes revealed that users posted about the relation between diabetes and other diseases such as heart attack. The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic that was also expressed by users and scientifically documented in the literature. The inclusion of Yoga in posts about diabetes is interesting. While yoga would certainly be labeled as a form of fitness, when considering the post, it was insightful to see discussion on the mental health benefits that yoga offers to those living with diabetes BIBREF63 .
Diet had the highest number of subtopics. For example, religious diet activities such as fasting during the month of Ramadan for Muslims incorporated two subtopics categorized under the diet topic (Tables TABREF7 and TABREF8 ). This information has implications for the type of diets that are being practiced in the religious community, but may help inform religious scholars who focus on health and psychological conditions during fasting. Other religions such as Judaism, Christianity, and Taoism have periods of fasting that were not captured in our data collection, which may have been due to lack of posts or the timeframe in which we collected data. The diet plans of celebrities were also considered influential to explaining and informing diet opinions of Twitter users BIBREF64 .
Exercise themes show the Twitter users' association of exercise with “brain" benefits such as increased memory and cognitive performance (Tables TABREF7 and TABREF8 ) BIBREF65 . The topics also confirm that exercising is associated with controlling diabetes and assisting with meal planning BIBREF66 , BIBREF9 , and obesity BIBREF67 . Additionally, we found that Twitter users mentioned exercise topics about the use of computer games that assist with exercising. The recent mobile gaming phenomenon Pokemon-Go BIBREF68 was highly associated with the exercise topic. Pokemon-Go allows users to operate in a virtual environment while simultaneously functioning in the real world. Capturing Pokemons, battling characters, and finding physical locations for meeting other users required physical activity to reach predefined locations. These themes reflect the potential of augmented reality for increasing patients' physical activity levels BIBREF69 .
Obesity had the lowest number of subtopics in our study. Three of the subtopics were related to other diseases such as diabetes (Tables TABREF7 and TABREF8 ). The scholarly literature has well documented the possible linkages between obesity and chronic diseases such as diabetes BIBREF1 as supported by the study results. The topic of children is another prominent subtopic associated with obesity. There has been an increasing number of opinions in regard to child obesity and national health campaigns that have been developed to encourage physical activity among children BIBREF70 . Alzheimer was also identified as a topic under obesity. Although considered a perplexing finding, recent studies have been conducted to identify possible correlation between obesity and Alzheimer's disease BIBREF71 , BIBREF72 , BIBREF73 . Indeed, Twitter users have expressed opinions about the study of Alzheimer's disease and the linkage between these two topics.
This paper addresses a need for clinical providers, public health experts, and social scientists to utilize a large conversational dataset to collect and utilize population level opinions and information needs. Although our framework is applied to Twitter, the applications from this study can be used in patient communication devices monitored by physicians or weight management interventions with social media accounts, and support large scale population-wide initiatives to promote healthy behaviors and preventative measures for diabetes, diet, exercise, and obesity.
This research has some limitations. First, our DDEO analysis does not take the geographical location of the Twitter users into consideration and thus does not reveal whether certain geographical differences exist. Second, we used a limited number of queries to select the initial pool of tweets, thus perhaps missing tweets that may have been relevant to DDEO but used unusual terms. Third, our analysis only included tweets generated in one month; however, as our previous work has demonstrated BIBREF42 , public opinion can change during a year. Additionally, we did not track individuals across time to detect changes in the common themes discussed. Our future research plans include introducing a dynamic framework to collect and analyze DDEO-related tweets during extended time periods (multiple months) and incorporating spatial analysis of DDEO-related tweets.
Conclusion
This study represents the first step in developing routine processes to collect, analyze, and interpret DDEO-related posts to social media around health-related topics and presents a transdisciplinary approach to analyzing public discussions around health topics. With 2.34 billion social media users in 2016, the ability to collect and synthesize social media data will continue to grow. Developing methods to make this process more streamlined and robust will allow for more rapid identification of public health trends in real time.
Note: Amir Karami will handle correspondence at all stages of refereeing and publication.
Conflict of interest
The authors state that they have no conflict of interest.
Acknowledgement
This research was partially supported by the first author's startup research funding provided by the University of South Carolina, School of Library and Information Science. We thank Jill Chappell-Fail and Jeff Salter at the University of South Carolina College of Information and Communications for assistance with technical support.
References | Yes |
18288c7b0f8bd7839ae92f9c293e7fb85c7e146a | 18288c7b0f8bd7839ae92f9c293e7fb85c7e146a_0 | Q: How strong was the correlation between exercise and diabetes?
Text: Introduction
The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered overweight and over 600 million adults considered obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent, affecting 25 percent of U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool-aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 . Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .
Obesity can be reduced through modifiable lifestyle behaviors such as diet and exercise BIBREF4 . There are several comorbidities associated with being overweight or obese, such as diabetes BIBREF5 . The prevalence of diabetes in adults has risen globally from 4.7% in 1980 to 8.5% in 2014. Current projections estimate that by 2050, 29 million Americans will be diagnosed with type 2 diabetes, which is a 165% increase from the 11 million diagnosed in 2002 BIBREF6 . Studies show that there are strong relations among diabetes, diet, exercise, and obesity (DDEO) BIBREF7 , BIBREF4 , BIBREF8 , BIBREF9 ; however, the general public's perception of DDEO remains limited to survey-based studies BIBREF10 .
The growth of social media has provided a research opportunity to track public behaviors, information, and opinions about common health issues. It is estimated that the number of social media users will increase from 2.34 billion in 2016 to 2.95 billion in 2020 BIBREF11 . Twitter has 316 million users worldwide BIBREF12 providing a unique opportunity to understand users' opinions with respect to the most common health issues BIBREF13 . Publicly available Twitter posts have facilitated data collection and leveraged the research at the intersection of public health and data science; thus, informing the research community of major opinions and topics of interest among the general population BIBREF14 , BIBREF15 , BIBREF16 that cannot otherwise be collected through traditional means of research (e.g., surveys, interviews, focus groups) BIBREF17 , BIBREF18 . Furthermore, analyzing Twitter data can help health organizations such as state health departments and large healthcare systems to provide health advice and track health opinions of their populations and provide effective health advice when needed BIBREF13 .
Among computational methods to analyze tweets, computational linguistics is a well-known developed approach to gain insight into a population, track health issues, and discover new knowledge BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Twitter data has been used for a wide range of health and non-health related applications, such as stock market BIBREF23 and election analysis BIBREF24 . Some examples of Twitter data analysis for health-related topics include: flu BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , mental health BIBREF31 , Ebola BIBREF32 , BIBREF33 , Zika BIBREF34 , medication use BIBREF35 , BIBREF36 , BIBREF37 , diabetes BIBREF38 , and weight loss and obesity BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF21 .
The previous Twitter studies have dealt with extracting common topics of one health issue discussed by the users to better understand common themes; however, this study utilizes an innovative approach to computationally analyze unstructured health related text data exchanged via Twitter to characterize health opinions regarding four common health issues, including diabetes, diet, exercise and obesity (DDEO) on a population level. This study identifies the characteristics of the most common health opinions with respect to DDEO and discloses public perception of the relationship among diabetes, diet, exercise and obesity. These common public opinions/topics and perceptions can be used by providers and public health agencies to better understand the common opinions of their population denominators in regard to DDEO, and reflect upon those opinions accordingly.
Methods
Our approach uses semantic and linguistics analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis.
Data Collection
This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provide both historic and real-time data collection. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available on the first author's website. Figure FIGREF3 shows a sample of the collected tweets.
Topic Discovery
To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes", “cancer", and “influenza" into a topic that has an overall “disease" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .
Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .
Twitter users can post their opinions or share information about a subject to the public. Identifying the main topics of users' tweets provides an interesting point of reference, but conceptualizing larger subtopics of millions of tweets can reveal valuable insight to users' opinions. The topic discovery component of the study approach uses LDA to find main topics, themes, and opinions in the collected tweets.
We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list to remove stop words, which do not have semantic value for analysis (such as “the"); and (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used, since the highest log-likelihood indicates the optimum number of topics BIBREF57 . The highest log-likelihood was obtained with 425 topics.
Topic Content Analysis
The topic content analysis component used an objective, lexicon-based interpretation approach to analyze the content of topics. The lexicon-based approach uses dictionaries to disclose the semantic orientation of words in a topic. Linguistic Inquiry and Word Count (LIWC) is a linguistic analysis tool that reveals thoughts, feelings, personality, and motivations in a corpus BIBREF58 , BIBREF59 , BIBREF60 . LIWC has accepted rates of sensitivity, specificity, and English proficiency BIBREF61 . LIWC has a health-related dictionary that can help determine whether a topic contains words associated with health. In this analysis, we used LIWC to find health-related topics.
Results
Obesity and Diabetes showed the highest and the lowest number of tweets (51.7% and 8.0%). Diet and Exercise formed 23.7% and 16.6% of the tweets (Table TABREF6 ).
Out of all 4.5 million DDEO-related tweets returned by Twitter's API, the LDA found 425 topics. We used LIWC to filter the detected 425 topics and found 222 health-related topics. Additionally, we labeled topics based on the presence of DDEO words. For example, if a topic had “diet", we labeled it as a diet-related topic. As expected, and driven by the initial Twitter API queries, common topics were Diabetes, Diet, Exercise, and Obesity (DDEO). Table TABREF7 shows that the highest and the lowest number of topics were related to exercise and diabetes (80 and 21 out of 222). Diet and Obesity had similar rates (58 and 63 out of 222).
Each of the DDEO topics included several common subtopics including both DDEO and non-DDEO terms discovered by the LDA algorithm (Table TABREF7 ). Common subtopics for “Diabetes", in order of frequency, included type 2 diabetes, obesity, diet, exercise, blood pressure, heart attack, yoga, and Alzheimer. Common subtopics for “Diet" included obesity, exercise, weight loss [medicine], celebrities, vegetarian, diabetes, religious diet, pregnancy, and mental health. Frequent subtopics for “Exercise" included fitness, obesity, daily plan, diet, brain, diabetes, and computer games. And finally, the most common subtopics for “Obesity" included diet, exercise, children, diabetes, Alzheimer, and cancer (Table TABREF7 ). Table TABREF8 provides illustrative examples for each of the topics and subtopics.
Further exploration of the subtopics revealed additional patterns of interest (Tables TABREF7 and TABREF8 ). We found 21 diabetes-related topics with 8 subtopics. While type 2 diabetes was the most frequent of the subtopics, heart attack, Yoga, and Alzheimer were the least frequent subtopics for diabetes. Diet had a wide variety of emerging themes, ranging from celebrity diet (e.g., Beyonce) to religious diet (e.g., Ramadan). Diet was detected in 63 topics with 10 subtopics; obesity was the most discussed diet-related subtopic, while pregnancy and mental health were the least discussed. Exploring the themes for Exercise subtopics revealed subjects such as computer games (e.g., Pokemon-Go) and brain exercises (e.g., memory improvement). Exercise had 7 subtopics, with fitness as the most discussed subtopic and computer games as the least discussed subtopic. Finally, Obesity themes showed topics such as Alzheimer (e.g., research studies) and cancer (e.g., breast cancer). Obesity had the lowest diversity of subtopics: six, with diet as the most discussed subtopic, and Alzheimer and cancer as the least discussed subtopics.
Diabetes subtopics show the relation between diabetes and exercise, diet, and obesity. Subtopics of diabetes revealed that users post about the relationship between diabetes and other diseases such as heart attack (Tables TABREF7 and TABREF8 ). The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature.
The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).
Discussion
Diabetes, diet, exercise, and obesity are common public health-related topics of opinion. Analyzing individual-level opinions with automated algorithmic techniques can be a useful approach to better characterize the health opinions of a population. Traditional public health polls and surveys are limited by a small sample size; however, Twitter provides a platform to capture an array of opinions and shared information as expressed in the words of the tweeter. Studies show that Twitter data can be used to discover trending topics, and that there is a strong correlation between Twitter health conversations and Centers for Disease Control and Prevention (CDC) statistics BIBREF62 .
This research provides a computational content analysis approach to conduct a deep analysis using a large data set of tweets. Our framework decodes public health opinions in DDEO related tweets, which can be applied to other public health issues. Among health-related subtopics, there are a wide range of topics from diseases to personal experiences such as participating in religious activities or vegetarian diets.
Diabetes subtopics showed the relationship between diabetes and exercise, diet, and obesity (Tables TABREF7 and TABREF8 ). Subtopics of diabetes revealed that users posted about the relation between diabetes and other diseases such as heart attack. The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic that was also expressed by users and scientifically documented in the literature. The inclusion of Yoga in posts about diabetes is interesting. While yoga would certainly be labeled as a form of fitness, when considering the post, it was insightful to see discussion on the mental health benefits that yoga offers to those living with diabetes BIBREF63 .
Diet had the highest number of subtopics. For example, religious diet activities such as fasting during the month of Ramadan for Muslims incorporated two subtopics categorized under the diet topic (Tables TABREF7 and TABREF8 ). This information has implications for the type of diets that are being practiced in the religious community, but may help inform religious scholars who focus on health and psychological conditions during fasting. Other religions such as Judaism, Christianity, and Taoism have periods of fasting that were not captured in our data collection, which may have been due to lack of posts or the timeframe in which we collected data. The diet plans of celebrities were also considered influential to explaining and informing diet opinions of Twitter users BIBREF64 .
Exercise themes show the Twitter users' association of exercise with “brain" benefits such as increased memory and cognitive performance (Tables TABREF7 and TABREF8 ) BIBREF65 . The topics also confirm that exercising is associated with controlling diabetes and assisting with meal planning BIBREF66 , BIBREF9 , and obesity BIBREF67 . Additionally, we found that Twitter users mentioned exercise topics about the use of computer games that assist with exercising. The recent mobile gaming phenomenon Pokemon-Go BIBREF68 was highly associated with the exercise topic. Pokemon-Go allows users to operate in a virtual environment while simultaneously functioning in the real world. Capturing Pokemons, battling characters, and finding physical locations for meeting other users required physical activity to reach predefined locations. These themes reflect the potential of augmented reality for increasing patients' physical activity levels BIBREF69 .
Obesity had the lowest number of subtopics in our study. Three of the subtopics were related to other diseases such as diabetes (Tables TABREF7 and TABREF8 ). The scholarly literature has well documented the possible linkages between obesity and chronic diseases such as diabetes BIBREF1 as supported by the study results. The topic of children is another prominent subtopic associated with obesity. There has been an increasing number of opinions in regard to child obesity and national health campaigns that have been developed to encourage physical activity among children BIBREF70 . Alzheimer was also identified as a topic under obesity. Although considered a perplexing finding, recent studies have been conducted to identify possible correlation between obesity and Alzheimer's disease BIBREF71 , BIBREF72 , BIBREF73 . Indeed, Twitter users have expressed opinions about the study of Alzheimer's disease and the linkage between these two topics.
This paper addresses a need for clinical providers, public health experts, and social scientists to utilize a large conversational dataset to collect and utilize population level opinions and information needs. Although our framework is applied to Twitter, the applications from this study can be used in patient communication devices monitored by physicians or weight management interventions with social media accounts, and support large scale population-wide initiatives to promote healthy behaviors and preventative measures for diabetes, diet, exercise, and obesity.
This research has some limitations. First, our DDEO analysis does not take the geographical location of the Twitter users into consideration and thus does not reveal whether certain geographical differences exist. Second, we used a limited number of queries to select the initial pool of tweets, thus perhaps missing tweets that may have been relevant to DDEO but used unusual terms. Third, our analysis only included tweets generated in one month; however, as our previous work has demonstrated BIBREF42 , public opinion can change during a year. Additionally, we did not track individuals across time to detect changes in the common themes discussed. Our future research plans include introducing a dynamic framework to collect and analyze DDEO-related tweets during extended time periods (multiple months) and incorporating spatial analysis of DDEO-related tweets.
Conclusion
This study represents the first step in developing routine processes to collect, analyze, and interpret DDEO-related posts to social media around health-related topics and presents a transdisciplinary approach to analyzing public discussions around health topics. With 2.34 billion social media users in 2016, the ability to collect and synthesize social media data will continue to grow. Developing methods to make this process more streamlined and robust will allow for more rapid identification of public health trends in real time.
Note: Amir Karami will handle correspondence at all stages of refereeing and publication.
Conflict of interest
The authors state that they have no conflict of interest.
Acknowledgement
This research was partially supported by the first author's startup research funding provided by the University of South Carolina, School of Library and Information Science. We thank Jill Chappell-Fail and Jeff Salter at the University of South Carolina College of Information and Communications for assistance with technical support.
References | weak correlation with p-value of 0.08 |
b5e883b15e63029eb07d6ff42df703a64613a18a | b5e883b15e63029eb07d6ff42df703a64613a18a_0 | Q: How were topics of interest about DDEO identified?
Text: Introduction
The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered overweight and over 600 million adults considered obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent, affecting 25 percent of U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool-aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 . Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .
Obesity can be reduced through modifiable lifestyle behaviors such as diet and exercise BIBREF4 . There are several comorbidities associated with being overweight or obese, such as diabetes BIBREF5 . The prevalence of diabetes in adults has risen globally from 4.7% in 1980 to 8.5% in 2014. Current projections estimate that by 2050, 29 million Americans will be diagnosed with type 2 diabetes, which is a 165% increase from the 11 million diagnosed in 2002 BIBREF6 . Studies show that there are strong relations among diabetes, diet, exercise, and obesity (DDEO) BIBREF7 , BIBREF4 , BIBREF8 , BIBREF9 ; however, the general public's perception of DDEO remains limited to survey-based studies BIBREF10 .
The growth of social media has provided a research opportunity to track public behaviors, information, and opinions about common health issues. It is estimated that the number of social media users will increase from 2.34 billion in 2016 to 2.95 billion in 2020 BIBREF11 . Twitter has 316 million users worldwide BIBREF12 providing a unique opportunity to understand users' opinions with respect to the most common health issues BIBREF13 . Publicly available Twitter posts have facilitated data collection and leveraged the research at the intersection of public health and data science; thus, informing the research community of major opinions and topics of interest among the general population BIBREF14 , BIBREF15 , BIBREF16 that cannot otherwise be collected through traditional means of research (e.g., surveys, interviews, focus groups) BIBREF17 , BIBREF18 . Furthermore, analyzing Twitter data can help health organizations such as state health departments and large healthcare systems to provide health advice and track health opinions of their populations and provide effective health advice when needed BIBREF13 .
Among computational methods to analyze tweets, computational linguistics is a well-known developed approach to gain insight into a population, track health issues, and discover new knowledge BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Twitter data has been used for a wide range of health and non-health related applications, such as stock market BIBREF23 and election analysis BIBREF24 . Some examples of Twitter data analysis for health-related topics include: flu BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , mental health BIBREF31 , Ebola BIBREF32 , BIBREF33 , Zika BIBREF34 , medication use BIBREF35 , BIBREF36 , BIBREF37 , diabetes BIBREF38 , and weight loss and obesity BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF21 .
The previous Twitter studies have dealt with extracting common topics of one health issue discussed by the users to better understand common themes; however, this study utilizes an innovative approach to computationally analyze unstructured health related text data exchanged via Twitter to characterize health opinions regarding four common health issues, including diabetes, diet, exercise and obesity (DDEO) on a population level. This study identifies the characteristics of the most common health opinions with respect to DDEO and discloses public perception of the relationship among diabetes, diet, exercise and obesity. These common public opinions/topics and perceptions can be used by providers and public health agencies to better understand the common opinions of their population denominators in regard to DDEO, and reflect upon those opinions accordingly.
Methods
Our approach uses semantic and linguistics analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis.
Data Collection
This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provide both historic and real-time data collection. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available on the first author's website. Figure FIGREF3 shows a sample of the collected tweets.
Topic Discovery
To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes", “cancer", and “influenza" into a topic that has an overall “disease" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .
Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .
Twitter users can post their opinions or share information about a subject to the public. Identifying the main topics of users' tweets provides an interesting point of reference, but conceptualizing larger subtopics of millions of tweets can reveal valuable insight to users' opinions. The topic discovery component of the study approach uses LDA to find main topics, themes, and opinions in the collected tweets.
We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list to remove stop words, which do not have semantic value for analysis (such as “the"); and (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used, since the highest log-likelihood indicates the optimum number of topics BIBREF57 . The highest log-likelihood was obtained with 425 topics.
Topic Content Analysis
The topic content analysis component used an objective, lexicon-based interpretation approach to analyze the content of topics. The lexicon-based approach uses dictionaries to disclose the semantic orientation of words in a topic. Linguistic Inquiry and Word Count (LIWC) is a linguistic analysis tool that reveals thoughts, feelings, personality, and motivations in a corpus BIBREF58 , BIBREF59 , BIBREF60 . LIWC has accepted rates of sensitivity, specificity, and English proficiency BIBREF61 . LIWC has a health-related dictionary that can help determine whether a topic contains words associated with health. In this analysis, we used LIWC to find health-related topics.
Results
Obesity and Diabetes showed the highest and the lowest number of tweets (51.7% and 8.0%). Diet and Exercise formed 23.7% and 16.6% of the tweets (Table TABREF6 ).
Out of all 4.5 million DDEO-related tweets returned by Twitter's API, the LDA found 425 topics. We used LIWC to filter the detected 425 topics and found 222 health-related topics. Additionally, we labeled topics based on the presence of DDEO words. For example, if a topic had “diet", we labeled it as a diet-related topic. As expected, and driven by the initial Twitter API queries, common topics were Diabetes, Diet, Exercise, and Obesity (DDEO). Table TABREF7 shows that the highest and the lowest number of topics were related to exercise and diabetes (80 and 21 out of 222). Diet and Obesity had similar rates (58 and 63 out of 222).
Each of the DDEO topics included several common subtopics including both DDEO and non-DDEO terms discovered by the LDA algorithm (Table TABREF7 ). Common subtopics for “Diabetes", in order of frequency, included type 2 diabetes, obesity, diet, exercise, blood pressure, heart attack, yoga, and Alzheimer. Common subtopics for “Diet" included obesity, exercise, weight loss [medicine], celebrities, vegetarian, diabetes, religious diet, pregnancy, and mental health. Frequent subtopics for “Exercise" included fitness, obesity, daily plan, diet, brain, diabetes, and computer games. And finally, the most common subtopics for “Obesity" included diet, exercise, children, diabetes, Alzheimer, and cancer (Table TABREF7 ). Table TABREF8 provides illustrative examples for each of the topics and subtopics.
Further exploration of the subtopics revealed additional patterns of interest (Tables TABREF7 and TABREF8 ). We found 21 diabetes-related topics with 8 subtopics. While type 2 diabetes was the most frequent of the subtopics, heart attack, Yoga, and Alzheimer were the least frequent subtopics for diabetes. Diet had a wide variety of emerging themes, ranging from celebrity diet (e.g., Beyonce) to religious diet (e.g., Ramadan). Diet was detected in 63 topics with 10 subtopics; obesity was the most discussed diet-related subtopic, while pregnancy and mental health were the least discussed. Exploring the themes for Exercise subtopics revealed subjects such as computer games (e.g., Pokemon-Go) and brain exercises (e.g., memory improvement). Exercise had 7 subtopics, with fitness as the most discussed subtopic and computer games as the least discussed subtopic. Finally, Obesity themes showed topics such as Alzheimer (e.g., research studies) and cancer (e.g., breast cancer). Obesity had the lowest diversity of subtopics: six, with diet as the most discussed subtopic, and Alzheimer and cancer as the least discussed subtopics.
Diabetes subtopics show the relationship between diabetes and exercise, diet, and obesity. Subtopics of diabetes revealed that users posted about the relationship between diabetes and other diseases such as heart attack (Tables TABREF7 and TABREF8). The subtopic Alzheimer also appears among the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature.
The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relationships among the four DDEO areas. Our results show users' interest in posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9). The strongest correlation among the topics was between exercise and obesity ( INLINEFORM0 ). Other notable correlations were diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).
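The paper does not detail how these correlations were computed; one plausible reading, sketched below under that assumption, is to correlate per-tweet topic weights aggregated by DDEO label. The `doc_topic` matrix and `ddeo_topic_ids` grouping are assumed inputs:

```python
import numpy as np
import pandas as pd

# doc_topic: assumed (n_tweets x n_topics) matrix of LDA topic proportions per tweet.
# ddeo_topic_ids: assumed dict mapping each DDEO label to the indices of its topics.
prevalence = pd.DataFrame({
    label: doc_topic[:, idx].sum(axis=1)   # total weight of that DDEO label per tweet
    for label, idx in ddeo_topic_ids.items()
})
correlations = prevalence.corr(method="pearson")  # e.g. exercise vs. obesity, diet vs. obesity
print(correlations.round(2))
```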
Discussion
Diabetes, diet, exercise, and obesity are common subjects of public health-related opinion. Analyzing individual-level opinions with automated algorithmic techniques can be a useful approach to better characterize the health opinions of a population. Traditional public health polls and surveys are limited by small sample sizes; Twitter, however, provides a platform to capture an array of opinions and shared information as expressed in the tweeters' own words. Studies show that Twitter data can be used to discover trending topics, and that there is a strong correlation between Twitter health conversations and Centers for Disease Control and Prevention (CDC) statistics BIBREF62.
This research provides a computational content analysis approach to conduct a deep analysis using a large dataset of tweets. Our framework decodes public health opinions in DDEO-related tweets, and it can be applied to other public health issues. Among the health-related subtopics, there is a wide range of themes, from diseases to personal experiences such as participating in religious activities or following vegetarian diets.
Diabetes subtopics showed the relationship between diabetes and exercise, diet, and obesity (Tables TABREF7 and TABREF8). Subtopics of diabetes revealed that users posted about the relation between diabetes and other diseases such as heart attack. The subtopic Alzheimer also appears among the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature. The inclusion of yoga in posts about diabetes is interesting: while yoga would certainly be labeled as a form of fitness, when considering the posts it was insightful to see discussion of the mental health benefits that yoga offers to those living with diabetes BIBREF63.
Diet had the highest number of subtopics. For example, religious diet activities such as fasting during the month of Ramadan for Muslims accounted for two subtopics categorized under the diet topic (Tables TABREF7 and TABREF8). This information has implications for the types of diets being practiced in religious communities and may help inform religious scholars who focus on health and psychological conditions during fasting. Other religions, such as Judaism, Christianity, and Taoism, have periods of fasting that were not captured in our data collection, which may have been due to a lack of posts or to the timeframe in which we collected data. The diet plans of celebrities were also considered influential in shaping and informing the diet opinions of Twitter users BIBREF64.
Exercise themes show Twitter users' association of exercise with "brain" benefits such as increased memory and cognitive performance (Tables TABREF7 and TABREF8) BIBREF65. The topics also confirm that exercising is associated with controlling diabetes and assisting with meal planning BIBREF66, BIBREF9, and with obesity BIBREF67. Additionally, we found that Twitter users mentioned the use of computer games that assist with exercising. The recent mobile gaming phenomenon Pokemon-Go BIBREF68 was highly associated with the exercise topic. Pokemon-Go allows users to operate in a virtual environment while simultaneously functioning in the real world. Capturing Pokemons, battling characters, and finding physical locations for meeting other users required physical activity to reach predefined locations. These themes reflect the potential of augmented reality to increase patients' physical activity levels BIBREF69.
Obesity had the lowest number of subtopics in our study. Three of the subtopics were related to other diseases such as diabetes (Tables TABREF7 and TABREF8). The scholarly literature has well documented the possible linkages between obesity and chronic diseases such as diabetes BIBREF1, as supported by the study results. The topic of children is another prominent subtopic associated with obesity. There has been an increasing number of opinions regarding childhood obesity and the national health campaigns developed to encourage physical activity among children BIBREF70. Alzheimer was also identified as a topic under obesity. Although a perplexing finding, recent studies have been conducted to identify a possible correlation between obesity and Alzheimer's disease BIBREF71, BIBREF72, BIBREF73. Indeed, Twitter users have expressed opinions about the study of Alzheimer's disease and the linkage between these two topics.
This paper addresses a need for clinical providers, public health experts, and social scientists to utilize a large conversational dataset to collect and use population-level opinions and information needs. Although our framework is applied to Twitter, the applications from this study can be used in patient communication devices monitored by physicians or in weight management interventions with social media accounts, and can support large-scale, population-wide initiatives to promote healthy behaviors and preventative measures for diabetes, diet, exercise, and obesity.
This research has some limitations. First, our DDEO analysis does not take the geographical location of the Twitter users into consideration and thus does not reveal whether geographical differences exist. Second, we used a limited number of queries to select the initial pool of tweets, thus perhaps missing tweets that were relevant to DDEO but used unusual terms not covered by our queries. Third, our analysis only included tweets generated in one month; however, as our previous work has demonstrated BIBREF42, public opinion can change during a year. Additionally, we did not track individuals across time to detect changes in common themes discussed. Our future research plans include introducing a dynamic framework to collect and analyze DDEO-related tweets over extended time periods (multiple months) and incorporating spatial analysis of DDEO-related tweets.
Conclusion
This study represents a first step in developing routine processes to collect, analyze, and interpret DDEO-related social media posts, and it presents a transdisciplinary approach to analyzing public discussions around health topics. With 2.34 billion social media users in 2016, the ability to collect and synthesize social media data will continue to grow. Developing methods to make this process more streamlined and robust will allow for more rapid identification of public health trends in real time.
Note: Amir Karami will handle correspondence at all stages of refereeing and publication.
Conflict of interest
The authors state that they have no conflict of interest.
Acknowledgement
This research was partially supported by the first author's startup research funding provided by the University of South Carolina, School of Library and Information Science. We thank Jill Chappell-Fail and Jeff Salter at the University of South Carolina College of Information and Communications for assistance with technical support.
References | using topic modeling model Latent Dirichlet Allocation (LDA) |
c45a160d31ca8eddbfea79907ec8e59f543aab86 | c45a160d31ca8eddbfea79907ec8e59f543aab86_0 | Q: What datasets are used for evaluation?
Text: Introduction
Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear efects (e.g. using log). Numeric variables that are not “quantities" per se, such as age or even geographic coordinates tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.
There are, however, phenomena that are hard to represent, and modelers end up struggling to find the right representation: for example, the influence of social interactions between different persons, hierarchical decision making, the autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, and personality traits. The point here is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the "meaning" of a certain piece of information for the decision-making process) and its realisation in practice, and that further research should be done to find new representation paradigms.
Historically speaking, the natural language processing (NLP) field has faced similar dilemmas for decades, and for a while two general trends were competing: statistical modeling approaches and linguistic-theory-based approaches. The former relied on simple representations, such as vector frequencies or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them to a new era, where the two are approaching each other and achieving results hitherto considered extremely hard, such as question answering, translation, and next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.
Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as "one-hot encoding" in the machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced orthogonality. The former happens because we assign one new "dummy" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself: since one doesn't want to end up with too many categories, we might as well offer fewer options in a survey, or decrease the resolution of a sensor. The problem of enforced orthogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between "student" and "employed" is the same as between "student" and "retired", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrast encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is an unsupervised approach. The distance between "student" and "employed" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider, for example, car ownership versus departure time choice models.
The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables and is dependent on the problem at hand. We will focus on mode choice and test on a well-known dataset, comparing with both dummy and PCA encoding. All the data and code are made openly available, and the reader can follow and generate the results him/herself using the included iPython notebook. Our ultimate goal is certainly that the reader reuses our PyTre package for their own purposes.
This paper presents some results and conclusions after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here in the interest of clarity and replicability. While we show these concepts to be promising and innovative, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in the order of tens of thousands, while our categorical variables rarely go beyond a few dozen. This means, for example, that the smaller the number of original categories, the smaller the benefit of embeddings (in the limit, a binary variable like gender is useless to embed), and also that if we do get a significantly large and statistically representative dataset, a dummy variable representation is sufficient. We will quickly see, however, that complexity can grow quickly enough to justify an embeddings-based method, even without the shockingly better performance observed in NLP applications.
Representing categorical variables
We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. We tend to enforce a limited set of treatments such as:
Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the "dummies"). For each input vector $x_n$ with categorical value $v=d$, the value "1" will be assigned to the corresponding dummy, and "0" to all the others. If $v$ corresponds to the "default" category, all dummies are "0". A code sketch comparing the dummy and PCA encodings follows this list.
Contrast encoding BIBREF3 - same as dummy encoding, but instead of "1" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of comparing the mean of the target variable for a given category with a general statistic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).
Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.
Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.
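As a minimal sketch of the first and third treatments above (the dataframe and column name are invented for the example, and K is chosen arbitrarily):

```python
import pandas as pd
from sklearn.decomposition import PCA

df = pd.DataFrame({"purpose": ["work", "leisure", "school", "work", "business", "leisure"]})

# Dummy / one-hot encoding: D-1 binary columns
dummies = pd.get_dummies(df["purpose"], prefix="purpose", drop_first=True)

# PCA encoding: run PCA on the full dummy matrix and keep K components per category
K = 2
pca_codes = PCA(n_components=K).fit_transform(pd.get_dummies(df["purpose"]).astype(float))
```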
In datasets where behavior heterogeneity is high and the number of observations is significantly smaller than the population size, increasing dimensionality by adding a variable for each category is very risky, because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here: for a dummy variable that is "1" for only a few observations in the dataset, its coefficient will be "activated" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.
The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy" space and analyse the individual coefficients, as will be shown in our experiments.
The concept of text embeddings
The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector quickly becomes overwhelming. Think, for example, of a next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300000 words, and if we use about 5 preceding words as context, the number of independent variables of the model would become 1.5 million!
The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the Euclidean distance between semantically related words (e.g. "dog" and "cat") in this new space should be smaller than between unrelated words (e.g. "dog" and "optimize"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.
The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.
Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).
The output layer thus consists simply of a softmax function. In other words, it is exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an "alternative".
The concept of embeddings is directly associated with the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs and then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot encoding vector, with one "1" and the rest "0"), each hidden layer neuron just propagates the (single) weight that links it to that input neuron. If we have enough data for training this model, we will eventually reach a situation where, for each input word, there is a fixed vector of weights that is directly used in the output (softmax) function to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called the embedding size.
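A minimal numpy sketch of the forward pass just described: a one-hot input, a linear hidden layer whose weights form the embeddings, and a softmax output. All sizes and values are illustrative and anticipate the notation defined next (W of size K x D, B of size C x K):

```python
import numpy as np

D, K, C = 10, 3, 10               # vocabulary size, embedding size, number of output classes
rng = np.random.default_rng(0)
W = rng.normal(size=(K, D))       # embeddings matrix: column d is the embedding of word d
B = rng.normal(size=(C, K))       # softmax-layer coefficients
alpha = np.zeros(C)               # softmax intercepts

x = np.zeros(D)
x[4] = 1.0                        # one-hot input: word 4 is active

hidden = W @ x                    # equals W[:, 4], the embedding of the active word
scores = B @ hidden + alpha
probs = np.exp(scores - scores.max())
probs /= probs.sum()              # softmax over the C "alternatives"
```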
Formally, we have a dataset $\mathcal {D}=\lbrace x_n, y_n\rbrace , n=1\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:

$$P(y_n=c\,|\,x_n)=\frac{e^{B_c W x_n + \alpha _c}}{\sum _{c^{\prime }=1}^{C} e^{B_{c^{\prime }} W x_n + \alpha _{c^{\prime }}}}$$
where $W$ is the embeddings matrix of size $K\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\times K$) for the softmax layer, so $B_c$ is simply the (row) vector of coefficients for output class (alternative) $c$, and $\alpha _c$ is the corresponding intercept. The typical loss function used in such models is the categorical cross-entropy:

$$\mathcal {L}(n)=-\sum _{c=1}^{C}\delta _{y_n=c}\log P(y_n=c\,|\,x_n)$$
where $\delta _{i}$ is the Kronecker delta ($\delta _{true}=1; \delta _{false}=0$), and $\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.
So these so-called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping from each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word); it is therefore a supervised process, as opposed, for example, to PCA. The third and more interesting aspect relates to semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of English words. In such a visualization, data is projected into 2D space by maintaining the same vector-to-vector distances as in the original ($K$-dimensional) space. Therefore the X and Y axes have no specific meaning; only distances between every pair of points are relevant.
We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of "next words", are placed closer. Another intriguing consequence is that, since the words are now in the $K$-dimensional embeddings space, we can also do some linear algebra on them. A well-known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of "crowning" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 also shows an alternative interpretation of "man-female", as well as examples with cities and verb tense.
Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values), we only need to follow the well-known rules for normal random variables.
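A short sketch of that back-projection, using the same notation: the effective dummy-space coefficient of category d for alternative c is the inner product of B_c with the d-th embedding column (the matrices here are random placeholders standing in for estimated values):

```python
import numpy as np

K, D, C = 3, 10, 4
rng = np.random.default_rng(1)
W = rng.normal(size=(K, D))   # learned embeddings (K x D)
B = rng.normal(size=(C, K))   # choice-model coefficients on the embedded variable (C x K)

# Equivalent dummy-space coefficients: beta[c, d] = B[c] @ W[:, d]
beta_dummy = B @ W            # shape (C, D): one coefficient per alternative and category
# Standard errors / p-values follow from these same linear combinations of normal estimates.
```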
There are open databases available (e.g. GloVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (GloVe provides several embedding tables, with embedding sizes between 100 and 300). In our next word prediction example, this means models with 500-1500 variables, which is very manageable for our machines today.
Summarizing, the general idea of word embeddings is to re-represent a categorical variable as a lower-dimensional representation with continuous values. Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction for special events BIBREF10, where we collected textual event descriptions and used GloVe embedding vectors to incorporate that information in a neural network model.
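In practice the replacement is just a table lookup. The sketch below parses a GloVe text file (each line is a word followed by its vector) and averages the vectors of an event description; the file path and tokens are placeholders, not values from the paper:

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a word -> vector dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=float)
    return vectors

glove = load_glove("glove.6B.100d.txt")                 # placeholder path
description = ["music", "festival", "stadium"]          # placeholder event description
features = np.mean([glove[w] for w in description if w in glove], axis=0)
```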
Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (GloVe: approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, fewer than 1000 short event descriptions, with at most a few hundred words each). Instead of creating a new embeddings model ourselves using the events dataset, we reused the pre-trained GloVe vectors. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, far beyond the limited vocabulary present in our 1000 short texts. In practice we used a very small percentage of the English dictionary, and when, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well.
Travel behaviour embeddings
Differently from textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of a categorical nature but typically end up dummy encoded due to segmentation, such as age, income, or even origin/destination pair.
Our hypothesis is that, given the limitations of dummy variables that are commonly used and the unsupervised nature of PCA, using instead an embeddings mechanism should improve significantly the quality of our models, both in terms of loglikelihood but also in terms of allowing for lower complexity (i.e. less variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings" database, incrementally built from travel surveys from around the world. Such database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges for opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business" trip purpose will be closer to “work" than “leisure", in a departure time choice model; “student" will be closer to “unemployed" than to “retired" in a car ownership model).
Travel behaviour embeddings ::: The general idea
We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.
The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings", because that is the name of the object in our proposed Python “Travel Embeddings" package.
From an experimental design and application perspective, the approach followed in this paper is the following:
Create list of categorical variables to encode (the encoding set)
Split dataset into train, development and test sets
For each variable in the encoding set, learn the new embeddings using the embeddings train set. This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).
Encode choice models for train, development and test sets using the learned embeddings
Estimate choice model accordingly using its train set
Evaluate the new model using the test set
Since there is stochasticity in the embeddings training model, we will repeat the above multiple times for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood on the development set (i.e., in practice, its out-of-sample generalization performance) and report its performance on the test set.
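Schematically, the loop just described looks as follows; `fit_embeddings`, `encode`, `fit_mnl`, and `loglik` are hypothetical helpers standing in for the PyTre and PyLogit calls, not actual APIs:

```python
best_emb, best_dev_ll = None, float("-inf")

# Repeat embeddings training to account for its stochasticity
for seed in range(20):
    emb = fit_embeddings(train_set, encoding_vars, seed=seed)   # hypothetical helper
    mnl = fit_mnl(encode(train_set, emb))                       # hypothetical helper
    dev_ll = loglik(mnl, encode(dev_set, emb))                  # hypothetical helper
    if dev_ll > best_dev_ll:                                    # select on the development set
        best_emb, best_dev_ll = emb, dev_ll

# Report the selected embeddings' out-of-sample performance on the test set
final_mnl = fit_mnl(encode(train_set, best_emb))
test_ll = loglik(final_mnl, encode(test_set, best_emb))
```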
Travel behaviour embeddings ::: Methodology
Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.
The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right).
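A sketch of this Travel Embeddings layer in Keras, under the assumption of a single categorical variable (trip purpose), arbitrary sizes, and invented loss weights; it is not the authors' PyTre implementation. The embedding feeds both a softmax "regularizer" head that tries to recover the original category and, concatenated with the other variables, the final mode-choice softmax:

```python
from tensorflow.keras import layers, Model

D, K, C, N_OTHER = 10, 3, 3, 5        # categories, embedding size, modes, other covariates

cat_in = layers.Input(shape=(1,), dtype="int32", name="trip_purpose")
other_in = layers.Input(shape=(N_OTHER,), name="other_vars")

emb = layers.Flatten()(layers.Embedding(D, K, embeddings_regularizer="l2")(cat_in))

# Regularizer head: penalized whenever it cannot recover the original category
recon = layers.Dense(D, activation="softmax", name="recon")(emb)

# Output softmax: embeddings output fed together with the other variables
choice = layers.Dense(C, activation="softmax", name="choice")(
    layers.Concatenate()([emb, other_in]))

model = Model(inputs=[cat_in, other_in], outputs=[choice, recon])
model.compile(optimizer="adam",
              loss={"choice": "sparse_categorical_crossentropy",
                    "recon": "sparse_categorical_crossentropy"},
              loss_weights={"choice": 1.0, "recon": 0.5})  # invented weights
```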
An experiment with mode choice
The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work, from us or others, more elaborate formulations such as nested logit, mixed logit, or latent class choice models (LCCM) can take advantage of embeddings.
We will apply the methodology to the well-known "Swissmetro" dataset and compare it with dummy-variable and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, the PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate out-of-sample. For the sake of interpretability, we will also project coefficients from the embeddings and PCA models back into the dummy variable space.
All experiment code is available as a Jupyter notebook in a package we created for this work (which we called PyTre). For estimating the multinomial logit model (MNL) we used the PyLogit BIBREF11 package.
An experiment with mode choice ::: The Swissmetro dataset
The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of a modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for whom some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.
We split the dataset into 3 different parts (a code sketch follows this list):
Embeddings train set: 60% of the dataset (6373 vectors)
Development set: 20% of the dataset (2003 vectors)
Test set: 20% of the dataset (2003 vectors)
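The split can be reproduced with two chained calls to scikit-learn's train_test_split; `swissmetro` is the assumed dataframe of responses, and the proportions follow the text:

```python
from sklearn.model_selection import train_test_split

# 60% embeddings train set, 20% development set, 20% test set
train_set, rest = train_test_split(swissmetro, test_size=0.4, random_state=42)
dev_set, test_set = train_test_split(rest, test_size=0.5, random_state=42)
```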
An experiment with mode choice ::: Principles for the model specification
The PyLogit package BIBREF11 also uses Swissmetro as an example; therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated it with the test set. The results are shown in Tables TABREF31 and TABREF32. Since we are comparing the models on the test set, the key indicators should be the pseudo R-square and the log-likelihood. Indicators that consider model complexity (robust R-square and AIC) are less important on the test set in our view, because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting if test set performance is considerably inferior to that on the training set.	Swissmetro dataset
7358a1ce2eae380af423d4feeaa67d2bd23ae9dd | 7358a1ce2eae380af423d4feeaa67d2bd23ae9dd_0 | Q: How do their train their embeddings?
Text: Introduction
Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear efects (e.g. using log). Numeric variables that are not “quantities" per se, such as age or even geographic coordinates tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.
There are however phenomena that are hard to represent, and modelers end up struggling to find the right representation. For example, influence of social interactions between different persons, hierarchical decision making, autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, personality traits and so on. The point here, is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the “meaning" of a certain information for the decision making process) and its realisation in practice. And that further research should be done to find new representation paradigms.
Historically speaking, the natural language processing (NLP) field has had similar dilemmas for decades, and for a while two general trends were competing: the statistical modeling approaches, and the linguistic theory based approaches. The former relied on simple representations, such as vector frequencies, or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them to a new area, where the two are approaching each other and achieving hitherto results considered extremely hard, such as question answering, translation, next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.
Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as “one-hot-encoding" in machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced ortogonality. The former happens because we assign one new “dummy" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself. Since one doesn't want to end up with too many categories, we might as well give less options in a survey, or decrease the resolution of a sensor. The problem of enforced ortogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between “student" and “employed" is the same as between “student" and “retired", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrasted encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is a non-supervised approach. The distance between “student" and “employed" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider car ownership versus departure time choice models for example.
The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables, and is dependent on the problem at hand. We will focus on mode choice, and test on a well-known dataset, by comparing with both dummy and PCA encoding. All the dataset and code are made openly available, and the reader can follow and generate results him/herself using an iPython notebook included. Our ultimate goal is certainly that the reader reuses our PyTre package for own purposes.
This paper presents some results and conclusions, after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here for interest of clarity and replicability. While we show these concepts to be promising and innovative in this paper, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in order of tens of thousands, while our categorical variables rarely go beyond a few dozens. This means that, for example, it becomes clear later that the least number of original categories, the less the benefit of embeddings (in the limit, a binary variable like gender, is useless to do embeddings with), and also that if we do get a significantly large and statistically representative dataset, a dummy variables representation is sufficient. We will quickly see, however, that complexity can grow quick enough to justify an embeddings based method even if without the shockingly better performance observed in NLP applications.
Representing categorical variables
We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. We tend to enforce a limited set of treatments such as:
Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the “dummies"). At each input vector $x_n$, with categorical value $v=d$, the value “1" will be assigned to the corresponding dummy, while “0" to all others. If $v$ corresponds to the “default" category, all dummies are “0".
Contrast encoding BIBREF3 - same as dummy encoding, but instead of “1" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of subtracting the mean of the target variable, for a given category, with a general stastic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).
Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.
Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.
In datasets where behavior heterogeneity is high, and number of observations is significantly smaller than population size, increasing dimensionality by adding a variable per each category is very risky because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here is by considering that, for a dummy variable that is only “1" for a few observations in the dataset, its coefficient will be “activated" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.
The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy" space and analyse the individual coefficients, as will be shown in our experiments.
The concept of text embeddings
The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector, quickly becomes overwhelming. Think for example next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300000 words, and if we have about 5 words before for context, the number of independent variables of the model would become 1.5 million!
The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the euclidean distance of semantically related words (e.g. “dog" and “cat") in this new space should be smaller than unrelated words (e.g. “dog" and “optimize"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.
The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.
Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).
The output layer thus consists simply of a softmax function. In other words, exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an “alternative".
The concept of embeddings is directly associated to the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs; then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot-encoding vector, with one “1" and the rest with “0"), each hidden layer neuron just propagates the (single) weight that links to that input neuron. If we have enough data for training this model, we will eventually land on a situation where, for each input word, there is a fixed vector of weights that are directly used in the output (softmax) function, to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called embedding size.
Formally, we have a dataset $\mathcal {D}=\lbrace x_n, y_n\rbrace , n=1\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:
where $W$ is the embeddings matrix of size $K\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\times K$) for the softmax layer, so $B_c$ is simply the coefficients (row) vector for output class (alternative) $c$, and $\alpha _c$ is the corresponding intercept. The typical loss function used in such models is called the categorical cross entropy:
Where $\delta _{i}$ is the kronecker delta ($\delta _{true}=1; \delta _{false}=0$), and $\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously, and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.
So these so called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping between each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word), therefore it is a supervised process, as opposed for example to PCA. The third and more interesting aspect relates with semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of english words. In such a visualization, data is projected in 2D space by maintaining the same vector-to-vector distances as in the original ($K$ order space). Therefore the X and Y axes have no specific meaning, only distances between every pair of points are relevant.
We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of “next words", are placed closer. Another intriguing consequence is that, since the words are now in the $K$ dimensional, embeddings space, we can also do some linear algebra on them. A well known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of “crowning" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 show also an alternative interpretation of “man-female", as well as examples with cities and verb tense.
Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values) we only need to follow the well known rules of normal random variables.
There are open databases available (e.g. GLoVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (Glove provides several embedding tables, up to embedding size between 100 and 300). In our next word application example, we now talk about models with 500-1500 variables, which is very manageable for our machines today.
Summarizing, the general idea of word embeddings is to re-represent a categorical variable into a lower dimensional representation with continuous values . Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction in special events BIBREF10, where we collected event textual descriptions, and used Glove embedding vectors to incorporate such information in a neural network model.
Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (Glove, approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, less than 1000 short event descriptions, with at most few hundred words each). Instead of creating ourselves a new embeddings model using the events dataset, we reused the pre-trained GloVe dataset. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, much beyond the limited vocabulary that we obtained in our 1000 short texts. In practice we have used a very small percentage of the english dictionary. When, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well.
Travel behaviour embeddings
Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair.
Our hypothesis is that, given the limitations of dummy variables that are commonly used and the unsupervised nature of PCA, using instead an embeddings mechanism should improve significantly the quality of our models, both in terms of loglikelihood but also in terms of allowing for lower complexity (i.e. less variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings" database, incrementally built from travel surveys from around the world. Such database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges for opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business" trip purpose will be closer to “work" than “leisure", in a departure time choice model; “student" will be closer to “unemployed" than to “retired" in a car ownership model).
Travel behaviour embeddings ::: The general idea
We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.
The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings", because that is the name of the object in our proposed Python “Travel Embeddings" package.
From an experimental design and application perspective, the approach followed in this paper is the following:
Create list of categorical variables to encode (the encoding set)
Split dataset into train, development and test sets
For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).
Encode choice models for train, development and test sets using the learned embeddings
Estimate choice model accordingly using its train set
Evaluate the new model using the test set
Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set.
Travel behaviour embeddings ::: Methodology
Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.
The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right).
An experiment with mode choice
The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work from us or others, more elaborate derivations can take advantage of embeddings such as nested, mixed logit or latent class choice models (LCCM), for example.
We will apply the methodology to the well-known “Swissmetro" dataset and compare it with dummy-variable and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, the PCA eigenvectors and the choice model are estimated from the same train and development sets, and validated out-of-sample. For the sake of interpretability, we will also project the coefficients from both the embeddings and the PCA models back into the dummy variable space.
All experiment code is available as a Jupyter notebook in a package we created for this work (which we called PyTre). For estimating the multinomial logit model (MNL) we used the PyLogit BIBREF11 package.
An experiment with mode choice ::: The Swissmetro dataset
The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.
We split the dataset into 3 different parts:
Embeddings train set: 60% of the dataset (6373 vectors)
Development set: 20% of the dataset (2003 vectors)
Test set: 20% of the dataset (2003 vectors)
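The split above can be reproduced with a few lines of NumPy. The random seed is arbitrary, the file name assumes a local copy of the dataset, and the resulting counts will match the proportions rather than the exact figures listed.
import numpy as np
import pandas as pd

df = pd.read_csv("swissmetro.dat", sep="\t")         # assumed local copy of the cleaned responses
rng = np.random.RandomState(42)
idx = rng.permutation(len(df))
n_train, n_dev = int(0.6 * len(idx)), int(0.2 * len(idx))
train_idx, dev_idx, test_idx = np.split(idx, [n_train, n_train + n_dev])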
An experiment with mode choice ::: Principles for the model specification
The PyLogit package BIBREF11 also uses Swissmetro as an example. Therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated it with the test set. The results are shown in tables TABREF31 and TABREF32. Since we are comparing the models on the test set, the key indicators should be pseudo R-square and log-likelihood. Indicators that consider model complexity (robust R-square and AIC) are less important on the test set in our view, because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting when test set performance is considerably inferior to training set performance. | The embeddings are learned several times using the training set, then the average is taken.
1165fb0b400ec1c521c1aef7a4e590f76fee1279 | 1165fb0b400ec1c521c1aef7a4e590f76fee1279_0 | Q: How do they model travel behavior?
Text: Introduction
Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear effects (e.g. using log). Numeric variables that are not “quantities" per se, such as age or even geographic coordinates, tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.
There are however phenomena that are hard to represent, and modelers end up struggling to find the right representation. For example, influence of social interactions between different persons, hierarchical decision making, autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, personality traits and so on. The point here, is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the “meaning" of a certain information for the decision making process) and its realisation in practice. And that further research should be done to find new representation paradigms.
Historically speaking, the natural language processing (NLP) field has had similar dilemmas for decades, and for a while two general trends were competing: the statistical modeling approaches, and the linguistic theory based approaches. The former relied on simple representations, such as vector frequencies, or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them into a new era, where the two are approaching each other and achieving results hitherto considered extremely hard, such as question answering, translation, and next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.
Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as “one-hot-encoding" in the machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced orthogonality. The former happens because we assign one new “dummy" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself. Since one doesn't want to end up with too many categories, we might as well offer fewer options in a survey, or decrease the resolution of a sensor. The problem of enforced orthogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between “student" and “employed" is the same as between “student" and “retired", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrast encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is an unsupervised approach. The distance between “student" and “employed" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider car ownership versus departure time choice models, for example.
The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables, and is dependent on the problem at hand. We will focus on mode choice, and test on a well-known dataset, by comparing with both dummy and PCA encoding. All the dataset and code are made openly available, and the reader can follow and generate results him/herself using an iPython notebook included. Our ultimate goal is certainly that the reader reuses our PyTre package for own purposes.
This paper presents some results and conclusions, after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here in the interest of clarity and replicability. While we show these concepts to be promising and innovative in this paper, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in the order of tens of thousands, while our categorical variables rarely go beyond a few dozen. This means, for example (as will become clear later), that the smaller the number of original categories, the smaller the benefit of embeddings (in the limit, a binary variable like gender is useless to do embeddings with), and also that if we do get a significantly large and statistically representative dataset, a dummy variable representation is sufficient. We will quickly see, however, that complexity can grow quickly enough to justify an embeddings-based method, even without the strikingly better performance observed in NLP applications.
Representing categorical variables
We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. We tend to enforce a limited set of treatments such as:
Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the “dummies"). At each input vector $x_n$, with categorical value $v=d$, the value “1" will be assigned to the corresponding dummy, while “0" to all others. If $v$ corresponds to the “default" category, all dummies are “0".
Contrast encoding BIBREF3 - same as dummy encoding, but instead of “1" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of subtracting from the mean of the target variable, for a given category, a general statistic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).
Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.
Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.
In datasets where behavior heterogeneity is high, and number of observations is significantly smaller than population size, increasing dimensionality by adding a variable per each category is very risky because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here is by considering that, for a dummy variable that is only “1" for a few observations in the dataset, its coefficient will be “activated" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.
The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy" space and analyse the individual coefficients, as will be shown in our experiments.
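The two baseline treatments can be illustrated in a few lines with pandas and scikit-learn; the column name and category labels below are made up for the example.
import pandas as pd
from sklearn.decomposition import PCA

df = pd.DataFrame({"purpose": ["work", "leisure", "business", "work", "shopping"]})

# dummy (one-hot) encoding: D-1 binary columns, one parameter per category in the model
dummies = pd.get_dummies(df["purpose"], prefix="purpose", drop_first=True)

# PCA encoding: run PCA on the full dummy matrix and keep K components
pca = PCA(n_components=2)
pca_codes = pca.fit_transform(pd.get_dummies(df["purpose"]).astype(float))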
The concept of text embeddings
The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, and therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector quickly becomes overwhelming. Think, for example, of a next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300,000 words, and if we use about 5 preceding words as context, the number of independent variables of the model would become 1.5 million!
The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the euclidean distance of semantically related words (e.g. “dog" and “cat") in this new space should be smaller than unrelated words (e.g. “dog" and “optimize"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.
The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.
Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).
The output layer thus consists simply of a softmax function. In other words, exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an “alternative".
The concept of embeddings is directly associated to the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs; then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot-encoding vector, with one “1" and the rest with “0"), each hidden layer neuron just propagates the (single) weight that links to that input neuron. If we have enough data for training this model, we will eventually land on a situation where, for each input word, there is a fixed vector of weights that are directly used in the output (softmax) function, to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called embedding size.
Formally, we have a dataset $\mathcal {D}=\lbrace x_n, y_n\rbrace , n=1\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:
where $W$ is the embeddings matrix of size $K\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\times K$) for the softmax layer, so $B_c$ is simply the coefficients (row) vector for output class (alternative) $c$, and $\alpha _c$ is the corresponding intercept. The typical loss function used in such models is called the categorical cross entropy:
where $\delta _{i}$ is the Kronecker delta ($\delta _{true}=1; \delta _{false}=0$), and $\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously, and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.
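The expressions above translate directly into a few lines of NumPy; the dimensions are illustrative, and the snippet only computes the forward pass and the loss for a single observation (training would update $W$, $B$ and $\alpha $ by gradient descent).
import numpy as np

D, K, C = 10, 3, 4                        # categories, embedding size, alternatives
rng = np.random.RandomState(0)
W = 0.1 * rng.randn(K, D)                 # embeddings matrix (K x D)
B = 0.1 * rng.randn(C, K)                 # softmax coefficients (C x K)
alpha = np.zeros(C)                       # intercepts

def predict_proba(x_onehot):              # x_onehot: one-hot vector of length D
    scores = B @ (W @ x_onehot) + alpha
    scores -= scores.max()                # numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()

def cross_entropy(x_onehot, y):           # y: index of the observed class
    return -np.log(predict_proba(x_onehot)[y])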
So these so called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping between each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word), therefore it is a supervised process, as opposed for example to PCA. The third and more interesting aspect relates with semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of english words. In such a visualization, data is projected in 2D space by maintaining the same vector-to-vector distances as in the original ($K$ order space). Therefore the X and Y axes have no specific meaning, only distances between every pair of points are relevant.
We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of “next words", are placed closer. Another intriguing consequence is that, since the words are now in the $K$ dimensional, embeddings space, we can also do some linear algebra on them. A well known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of “crowning" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 show also an alternative interpretation of “man-female", as well as examples with cities and verb tense.
Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values) we only need to follow the well known rules of normal random variables.
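As a small illustration of this mapping: with an embeddings matrix $W$ ($K\times D$) and softmax coefficients $B$ ($C\times K$) as above, the implied dummy-space coefficient of category $d$ for alternative $c$ is the inner product of $B_c$ with the $d$-th column of $W$. The matrices below are random placeholders.
import numpy as np

rng = np.random.RandomState(0)
K, D, C = 3, 10, 4
W = rng.randn(K, D)                        # embeddings matrix
B = rng.randn(C, K)                        # softmax coefficients
beta_dummy = B @ W                         # (C x D): one projected coefficient per alternative and category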
There are open databases available (e.g. GLoVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (Glove provides several embedding tables, up to embedding size between 100 and 300). In our next word application example, we now talk about models with 500-1500 variables, which is very manageable for our machines today.
Summarizing, the general idea of word embeddings is to re-represent a categorical variable as a lower-dimensional representation with continuous values. Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction in special events BIBREF10, where we collected event textual descriptions, and used GloVe embedding vectors to incorporate such information in a neural network model.
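In code, such a replacement is just a table lookup followed by a concatenation; the toy lookup table below stands in for a learned (or downloaded) embeddings table.
import numpy as np
import pandas as pd

emb = {"work": np.array([0.1, -0.3]),      # illustrative 2-dimensional embeddings
       "leisure": np.array([0.4, 0.2]),
       "business": np.array([0.0, -0.1])}

df = pd.DataFrame({"purpose": ["work", "leisure", "business"], "cost": [2.5, 3.1, 1.8]})
emb_cols = pd.DataFrame(df["purpose"].map(emb).tolist(),
                        columns=["purpose_e1", "purpose_e2"], index=df.index)
X = pd.concat([df.drop(columns="purpose"), emb_cols], axis=1)   # numeric design matrix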
Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (Glove, approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, less than 1000 short event descriptions, with at most few hundred words each). Instead of creating ourselves a new embeddings model using the events dataset, we reused the pre-trained GloVe dataset. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, much beyond the limited vocabulary that we obtained in our 1000 short texts. In practice we have used a very small percentage of the english dictionary. When, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well.
Travel behaviour embeddings
In contrast to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily categorical in nature, but typically end up dummy encoded due to segmentation, such as age, income, or even origin/destination pair.
Our hypothesis is that, given the limitations of the dummy variables that are commonly used and the unsupervised nature of PCA, using an embeddings mechanism instead should significantly improve the quality of our models, both in terms of log-likelihood and in terms of allowing for lower complexity (i.e. fewer variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings" database, incrementally built from travel surveys from around the world. Such a database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges of opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business" trip purpose will be closer to “work" than to “leisure" in a departure time choice model; “student" will be closer to “unemployed" than to “retired" in a car ownership model).
Travel behaviour embeddings ::: The general idea
We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.
The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings", because that is the name of the object in our proposed Python “Travel Embeddings" package.
From an experimental design and application perspective, the approach followed in this paper is the following:
Create list of categorical variables to encode (the encoding set)
Split dataset into train, development and test sets
For each variable in the encoding set, learn the new embeddings using the embeddings train set. This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).
Encode choice models for train, development and test sets using the learned embeddings
Estimate choice model accordingly using its train set
Evaluate the new model using the test set
Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set.
Travel behaviour embeddings ::: Methodology
Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.
The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right).
An experiment with mode choice
The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work from us or others, more elaborate derivations can take advantage of embeddings such as nested, mixed logit or latent class choice models (LCCM), for example.
We will apply the methodology to the well-known “Swissmetro" dataset and compare it with dummy-variable and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, the PCA eigenvectors and the choice model are estimated from the same train and development sets, and validated out-of-sample. For the sake of interpretability, we will also project the coefficients from both the embeddings and the PCA models back into the dummy variable space.
All experiment code is available as a Jupyter notebook in a package we created for this work (which we called PyTre). For estimating the multinomial logit model (MNL) we used the PyLogit BIBREF11 package.
An experiment with mode choice ::: The Swissmetro dataset
The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.
We split the dataset into 3 different parts:
Embeddings train set: 60% of the dataset (6373 vectors)
Development set: 20% of the dataset (2003 vectors)
Test set: 20% of the dataset (2003 vectors)
An experiment with mode choice ::: Principles for the model specification
The PyLogit package BIBREF11 also uses Swissmetro as an example. Therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated it with the test set. The results are shown in tables TABREF31 and TABREF32. Since we are comparing the models on the test set, the key indicators should be pseudo R-square and log-likelihood. Indicators that consider model complexity (robust R-square and AIC) are less important on the test set in our view, because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting when test set performance is considerably inferior to training set performance. | The data from collected travel surveys is used to model travel behavior.
f2c5da398e601e53f9f545947f61de5f40ede1ee | f2c5da398e601e53f9f545947f61de5f40ede1ee_0 | Q: How do they interpret the coefficients?
Text: Introduction
Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear effects (e.g. using log). Numeric variables that are not “quantities" per se, such as age or even geographic coordinates, tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.
There are however phenomena that are hard to represent, and modelers end up struggling to find the right representation. For example, influence of social interactions between different persons, hierarchical decision making, autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, personality traits and so on. The point here, is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the “meaning" of a certain information for the decision making process) and its realisation in practice. And that further research should be done to find new representation paradigms.
Historically speaking, the natural language processing (NLP) field has had similar dilemmas for decades, and for a while two general trends were competing: the statistical modeling approaches, and the linguistic theory based approaches. The former relied on simple representations, such as vector frequencies, or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them into a new era, where the two are approaching each other and achieving results hitherto considered extremely hard, such as question answering, translation, and next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.
Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as “one-hot-encoding" in the machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced orthogonality. The former happens because we assign one new “dummy" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself. Since one doesn't want to end up with too many categories, we might as well offer fewer options in a survey, or decrease the resolution of a sensor. The problem of enforced orthogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between “student" and “employed" is the same as between “student" and “retired", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrast encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is an unsupervised approach. The distance between “student" and “employed" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider car ownership versus departure time choice models, for example.
The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables, and is dependent on the problem at hand. We will focus on mode choice, and test on a well-known dataset, by comparing with both dummy and PCA encoding. All the dataset and code are made openly available, and the reader can follow and generate results him/herself using an iPython notebook included. Our ultimate goal is certainly that the reader reuses our PyTre package for own purposes.
This paper presents some results and conclusions, after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here in the interest of clarity and replicability. While we show these concepts to be promising and innovative in this paper, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in the order of tens of thousands, while our categorical variables rarely go beyond a few dozen. This means, for example (as will become clear later), that the smaller the number of original categories, the smaller the benefit of embeddings (in the limit, a binary variable like gender is useless to do embeddings with), and also that if we do get a significantly large and statistically representative dataset, a dummy variable representation is sufficient. We will quickly see, however, that complexity can grow quickly enough to justify an embeddings-based method, even without the strikingly better performance observed in NLP applications.
Representing categorical variables
We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. We tend to enforce a limited set of treatments such as:
Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the “dummies"). At each input vector $x_n$, with categorical value $v=d$, the value “1" will be assigned to the corresponding dummy, while “0" to all others. If $v$ corresponds to the “default" category, all dummies are “0".
Contrast encoding BIBREF3 - same as dummy encoding, but instead of “1" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of subtracting from the mean of the target variable, for a given category, a general statistic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).
Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.
Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.
In datasets where behavior heterogeneity is high, and number of observations is significantly smaller than population size, increasing dimensionality by adding a variable per each category is very risky because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here is by considering that, for a dummy variable that is only “1" for a few observations in the dataset, its coefficient will be “activated" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.
The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy" space and analyse the individual coefficients, as will be shown in our experiments.
The concept of text embeddings
The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, and therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector quickly becomes overwhelming. Think, for example, of a next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300,000 words, and if we use about 5 preceding words as context, the number of independent variables of the model would become 1.5 million!
The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the euclidean distance of semantically related words (e.g. “dog" and “cat") in this new space should be smaller than unrelated words (e.g. “dog" and “optimize"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.
The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.
Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).
The output layer thus consists simply of a softmax function. In other words, exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an “alternative".
The concept of embeddings is directly associated to the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs; then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot-encoding vector, with one “1" and the rest with “0"), each hidden layer neuron just propagates the (single) weight that links to that input neuron. If we have enough data for training this model, we will eventually land on a situation where, for each input word, there is a fixed vector of weights that are directly used in the output (softmax) function, to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called embedding size.
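The propagation argument above can be checked in two lines of NumPy: multiplying a one-hot vector by the input-to-hidden weight matrix simply selects one of its columns, which is exactly the embedding lookup.
import numpy as np

D, K = 6, 3
W = np.random.randn(K, D)              # input-to-hidden weights
x = np.zeros(D)
x[2] = 1.0                             # one-hot encoding of category 2
assert np.allclose(W @ x, W[:, 2])     # hidden activations equal column 2 of W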
Formally, we have a dataset $\mathcal {D}=\lbrace x_n, y_n\rbrace , n=1\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:
where $W$ is the embeddings matrix of size $K\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\times K$) for the softmax layer, so $B_c$ is simply the coefficients (row) vector for output class (alternative) $c$, and $\alpha _c$ is the corresponding intercept. The typical loss function used in such models is called the categorical cross entropy:
where $\delta _{i}$ is the Kronecker delta ($\delta _{true}=1; \delta _{false}=0$), and $\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously, and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.
So these so called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping between each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word), therefore it is a supervised process, as opposed for example to PCA. The third and more interesting aspect relates with semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of english words. In such a visualization, data is projected in 2D space by maintaining the same vector-to-vector distances as in the original ($K$ order space). Therefore the X and Y axes have no specific meaning, only distances between every pair of points are relevant.
We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of “next words", are placed closer. Another intriguing consequence is that, since the words are now in the $K$ dimensional, embeddings space, we can also do some linear algebra on them. A well known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of “crowning" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 show also an alternative interpretation of “man-female", as well as examples with cities and verb tense.
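Such analogy arithmetic can be reproduced with, for instance, gensim's KeyedVectors; the file name below assumes a local copy of pretrained GloVe vectors converted to the word2vec text format, and the exact neighbours returned depend on which vectors are loaded.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("glove.6B.100d.w2v.txt")   # assumed local file
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))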
Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values) we only need to follow the well known rules of normal random variables.
There are open databases available (e.g. GLoVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (Glove provides several embedding tables, up to embedding size between 100 and 300). In our next word application example, we now talk about models with 500-1500 variables, which is very manageable for our machines today.
Summarizing, the general idea of word embeddings is to re-represent a categorical variable as a lower-dimensional representation with continuous values. Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction in special events BIBREF10, where we collected event textual descriptions, and used GloVe embedding vectors to incorporate such information in a neural network model.
Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (Glove, approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, less than 1000 short event descriptions, with at most few hundred words each). Instead of creating ourselves a new embeddings model using the events dataset, we reused the pre-trained GloVe dataset. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, much beyond the limited vocabulary that we obtained in our 1000 short texts. In practice we have used a very small percentage of the english dictionary. When, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well.
Travel behaviour embeddings
In contrast to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily categorical in nature, but typically end up dummy encoded due to segmentation, such as age, income, or even origin/destination pair.
Our hypothesis is that, given the limitations of the dummy variables that are commonly used and the unsupervised nature of PCA, using an embeddings mechanism instead should significantly improve the quality of our models, both in terms of log-likelihood and in terms of allowing for lower complexity (i.e. fewer variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings" database, incrementally built from travel surveys from around the world. Such a database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges of opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business" trip purpose will be closer to “work" than to “leisure" in a departure time choice model; “student" will be closer to “unemployed" than to “retired" in a car ownership model).
Travel behaviour embeddings ::: The general idea
We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.
The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings", because that is the name of the object in our proposed Python “Travel Embeddings" package.
From an experimental design and application perspective, the approach followed in this paper is the following:
Create list of categorical variables to encode (the encoding set)
Split dataset into train, development and test sets
For each variable in the encoding set, learn the new embeddings using the embeddings train set. This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).
Encode choice models for train, development and test sets using the learned embeddings
Estimate choice model accordingly using its train set
Evaluate the new model using the test set
Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set.
Travel behaviour embeddings ::: Methodology
Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.
The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right).
An experiment with mode choice
The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work from us or others, more elaborate derivations can take advantage of embeddings such as nested, mixed logit or latent class choice models (LCCM), for example.
We will apply the methodology to the well-known “Swissmetro" dataset and compare it with dummy-variable and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, the PCA eigenvectors and the choice model are estimated from the same train and development sets, and validated out-of-sample. For the sake of interpretability, we will also project the coefficients from both the embeddings and the PCA models back into the dummy variable space.
All experiment code is available as a Jupyter notebook in a package we created for this work (which we called PyTre). For estimating the multinomial logit model (MNL) we used the PyLogit BIBREF11 package.
An experiment with mode choice ::: The Swissmetro dataset
The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.
We split the dataset into 3 different parts:
Embeddings train set: 60% of the dataset (6373 vectors)
Development set: 20% of the dataset (2003 vectors)
Test set: 20% of the dataset (2003 vectors)
An experiment with mode choice ::: Principles for the model specification
The PyLogit package BIBREF11 also uses Swissmetro as an example. Therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated it with the test set. The results are shown in tables TABREF31 and TABREF32. Since we are comparing the models on the test set, the key indicators should be pseudo R-square and log-likelihood. Indicators that consider model complexity (robust R-square and AIC) are less important on the test set in our view, because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting when test set performance is considerably inferior to training set performance. | The coefficients are projected back to the dummy variable space.
2d4d0735c50749aa8087d1502ab7499faa2f0dd8 | 2d4d0735c50749aa8087d1502ab7499faa2f0dd8_0 | Q: By how much do they outperform previous state-of-the-art models?
Text: Introduction
Globally, human trafficking is one of the fastest growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recent estimates suggest that nearly 4 million adults and 1 million children are being victimized globally on any given day; furthermore, it is estimated that 99 percent of victims are female BIBREF1 . Escort websites are an increasingly popular vehicle for selling the services of trafficking victims. According to a recent survivor survey BIBREF2 , 38% of underage trafficking victims who were enslaved prior to 2004 were advertised online, and that number rose to 75% for those enslaved after 2004. Prior to its shutdown in April 2018, the website Backpage was the most frequently used online advertising platform; other popular escort websites include Craigslist, Redbook, SugarDaddy, and Facebook BIBREF2 . Despite the seizure of Backpage, there were nearly 150,000 new online sex advertisements posted per day in the U.S. alone in late 2018 BIBREF3 ; even with many of these new ads being re-posts of existing ads and traffickers often posting multiple ads for the same victims BIBREF2 , this volume is staggering.
Because of their ubiquity and public access, escort websites are a rich resource for anti-trafficking operations. However, many law enforcement agencies do not have the resources to sift through the volume of escort ads to identify those coming from potential traffickers. One scalable and efficient solution is to build a statistical model to predict the likelihood of an ad coming from a trafficker using a dataset annotated by anti-trafficking experts. We propose an ordinal regression neural network tailored for text input. This model comprises three components: (i) a Word2Vec model BIBREF4 that maps each word from the text input to a numeric vector, (ii) a gated-feedback recurrent neural network BIBREF5 that sequentially processes the word vectors, and (iii) an ordinal regression layer BIBREF6 that produces a predicted ordinal label. We use a modified cost function to mitigate inconsistencies in predictions associated with nonparametric ordinal regression. We also leverage several regularization techniques for deep neural networks to further improve model performance, such as residual connection BIBREF7 and batch normalization BIBREF8 . We conduct our experiments on Trafficking-10k BIBREF9 , a dataset of escort ads for which anti-trafficking experts assigned each sample one of seven ordered labels ranging from “1: Very Unlikely (to come from traffickers)” to “7: Very Likely”. Our proposed model significantly outperforms previously published models BIBREF9 on Trafficking-10k as well as a variety of baseline ordinal regression models. In addition, we analyze the emojis used in escort ads with Word2Vec and t-SNE BIBREF10 , and we show that the lexicon of trafficking-related emojis can be subsequently expanded.
In Section SECREF2 , we discuss related work on human trafficking detection and ordinal regression. In Section SECREF3 , we present our proposed model and detail its components. In Section SECREF4 , we present the experimental results, including the Trafficking-10K benchmark, a qualitative analysis of the predictions on raw data, and the emoji analysis. In Section SECREF5 , we summarize our findings and discuss future work.
Related Work
Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon.
Ordinal regression: We briefly review ordinal regression before introducing the proposed methodology. We assume that the training data are INLINEFORM0 , where INLINEFORM1 are the features and INLINEFORM2 is the response; INLINEFORM3 is the set of INLINEFORM4 ordered labels INLINEFORM5 with INLINEFORM6 . Many ordinal regression methods learn a composite map INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 have the interpretation that INLINEFORM10 is a latent “score” which is subsequently discretized into a category by INLINEFORM11 . INLINEFORM12 is often estimated by empirical risk minimization, i.e., by minimizing a loss function INLINEFORM13 averaged over the training data. Standard choices of INLINEFORM14 and INLINEFORM15 are reviewed by J. Rennie & N. Srebro ( BIBREF11 ).
Another common approach to ordinal regression, which we adopt in our proposed method, is to transform the label prediction into a series of INLINEFORM0 binary classification sub-problems, wherein the INLINEFORM1 th sub-problem is to predict whether or not the true label exceeds INLINEFORM2 BIBREF12 , BIBREF13 . For example, one might use a series of logistic regression models to estimate the conditional probabilities INLINEFORM3 for each INLINEFORM4 . J. Cheng et al. ( BIBREF6 ) estimated these probabilities jointly using a neural network; this was later extended to image data BIBREF14 as well as text data BIBREF15 , BIBREF16 . However, as acknowledged by J. Cheng et al. ( BIBREF6 ), the estimated probabilities need not respect the ordering INLINEFORM5 for all INLINEFORM6 and INLINEFORM7 . We force our estimator to respect this ordering through a penalty on its violation.
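To make this reduction concrete, the short sketch below (an illustrative example, not code from any of the cited papers) encodes ordinal labels as "rank exceeds j" indicators and decodes a vector of estimated exceedance probabilities back into a rank; the 0.5 decision threshold and the function names are assumptions.

```python
import numpy as np

def ordinal_to_binary(labels, num_ranks):
    """Encode each rank in {1, ..., K} as K-1 indicators of 'rank > j'."""
    return np.array([[1 if y > j else 0 for j in range(1, num_ranks)]
                     for y in labels])

def binary_to_rank(exceedance_probs, threshold=0.5):
    """Decode P(rank > j), j = 1..K-1, into a single predicted rank."""
    return 1 + int((np.asarray(exceedance_probs) > threshold).sum())

print(ordinal_to_binary([3, 7], num_ranks=7))  # ranks 3 and 7 -> [1 1 0 0 0 0] and [1 1 1 1 1 1]
print(binary_to_rank([0.9, 0.8, 0.3, 0.2, 0.1, 0.05]))  # 3
```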
Method
Our proposed ordinal regression model consists of the following three components: Word embeddings pre-trained by a Skip-gram model, a gated-feedback recurrent neural network that constructs summary features from sentences, and a multi-labeled logistic regression layer tailored for ordinal regression. See Figure SECREF3 for a schematic. The details of its components and their respective alternatives are discussed below.
Figure: Overview of the ordinal regression neural network for text input. INLINEFORM0 represents a hidden state in a gated-feedback recurrent neural network.
Word Embeddings
Vector representations of words, also known as word embeddings, can be obtained through unsupervised learning on a large text corpus so that certain linguistic regularities and patterns are encoded. Compared to Latent Semantic Analysis BIBREF17 , embedding algorithms using neural networks are particularly good at preserving linear regularities among words in addition to grouping similar words together BIBREF18 . Such embeddings can in turn help other algorithms achieve better performances in various natural language processing tasks BIBREF4 .
Unfortunately, the escort ads contain a plethora of emojis, acronyms, and (sometimes deliberate) typographical errors that are not encountered in more standard text data, which suggests that it is likely better to learn word embeddings from scratch on a large collection of escort ads instead of using previously published embeddings BIBREF9 . We use 168,337 ads scraped from Backpage as our training corpus and the Skip-gram model with Negative sampling BIBREF4 as our model.
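A minimal sketch of this pre-training step is given below; the gensim library is an assumed implementation choice, and a tiny repeated toy corpus stands in for the 168,337 scraped ads, while the hyperparameters that are set mirror the values listed in the Appendix.

```python
# pip install gensim  (gensim >= 4.0 assumed)
from gensim.models import Word2Vec

# Toy stand-in for the preprocessed, tokenized escort ads (emojis kept as tokens).
toy_ad = ["brand", "new", "girl", "visiting", "today", "special", "rate", "🍒"]
ads = [toy_ad] * 50  # in practice: one token list per scraped ad

w2v = Word2Vec(
    sentences=ads,
    vector_size=128,   # embedding size (Appendix)
    window=5,
    min_count=5,
    sg=1,              # skip-gram
    negative=100,      # negative sampling
    ns_exponent=0.75,  # unigram distribution raised to the 3/4 power
    alpha=0.2,         # pretraining learning rate
    epochs=50,
)
print(w2v.wv["girl"].shape)  # (128,)
```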
Gated-Feedback Recurrent Neural Network
To process entire sentences and paragraphs after mapping the words to embeddings, we need a model to handle sequential data. Recurrent neural networks (RNNs) have recently seen great success at modeling sequential data, especially in natural language processing tasks BIBREF19 . At a high level, an RNN is a neural network that processes a sequence of inputs one at a time, taking the summary of the sequence seen so far from the previous time point as an additional input and producing a summary for the next time point. One of the most widely used variants of RNNs, the Long short-term memory network (LSTM), uses various gates to control the information flow and is able to better preserve long-term dependencies in the running summary compared to a basic RNN BIBREF20 . In our implementation, we use a further refinement of multi-layered LSTMs, Gated-feedback recurrent neural networks (GF-RNNs), which tend to capture dependencies across different timescales more easily BIBREF5 .
Regularization techniques for neural networks including Dropout BIBREF21 , Residual connection BIBREF7 , and Batch normalization BIBREF8 are added to GF-RNN for further improvements.
After GF-RNN processes an entire escort ad, the average of the hidden states of the last layer becomes the input for the multi-labeled logistic regression layer which we discuss next.
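The following PyTorch sketch illustrates this encoder under stated assumptions: since gated-feedback RNNs are not available in standard libraries, a multi-layer LSTM stands in for the GF-RNN, and the padding and masking details are illustrative rather than taken from the paper; the layer sizes follow the Appendix.

```python
import torch
import torch.nn as nn

class AdEncoder(nn.Module):
    """Embeds a tokenized ad and mean-pools the last layer's hidden states."""
    def __init__(self, vocab_size, emb_dim=128, hidden=128, layers=3, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # nn.LSTM is a stand-in for the gated-feedback RNN used in the paper.
        self.rnn = nn.LSTM(emb_dim, hidden, num_layers=layers,
                           dropout=dropout, batch_first=True)

    def forward(self, token_ids, lengths):
        x = self.embed(token_ids)                      # (batch, seq, emb)
        out, _ = self.rnn(x)                           # (batch, seq, hidden)
        mask = (token_ids != 0).unsqueeze(-1).float()  # ignore padding positions
        summed = (out * mask).sum(dim=1)
        return summed / lengths.unsqueeze(-1).clamp(min=1)  # mean over time

enc = AdEncoder(vocab_size=10_000)
ids = torch.randint(1, 10_000, (2, 120))                    # two padded ads
feats = enc(ids, lengths=torch.tensor([120.0, 120.0]))      # (2, 128) features
print(feats.shape)
```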
Multi-Labeled Logistic Regression Layer
As noted previously, the ordinal regression problem can be cast into a series of binary classification problems and thereby utilize the large repository of available classification algorithms BIBREF12 , BIBREF13 , BIBREF14 . One formulation is as follows. Given INLINEFORM0 total ranks, the INLINEFORM1 -th binary classifier is trained to predict the probability that a sample INLINEFORM2 has rank larger than INLINEFORM3 . Then the predicted rank is INLINEFORM4
In a classification task, the final layer of a deep neural network is typically a softmax layer with dimension equal to the number of classes BIBREF20 . Using the ordinal-regression-to-binary-classifications formulation described above, J. Cheng et al. ( BIBREF6 ) replaced the softmax layer in their neural network with a INLINEFORM0 -dimensional sigmoid layer, where each neuron serves as a binary classifier (see Figure SECREF7 but without the order penalty to be discussed later).
With the sigmoid activation function, the output of the INLINEFORM0 th neuron can be viewed as the predicted probability that the sample has rank greater than INLINEFORM5 . Alternatively, the entire sigmoid layer can be viewed as performing multi-labeled logistic regression, where the INLINEFORM6 th label is the indicator of the sample's rank being greater than INLINEFORM7 . The training data are thus re-formatted accordingly, so that the response variable for a sample with rank k becomes a binary vector whose first k-1 entries are 1 and whose remaining entries are 0. J. Cheng et al.'s ( BIBREF6 ) final layer was preceded by a simple feed-forward network. In our case, word embeddings and GF-RNN allow us to construct a feature vector of fixed length from text input, so we can simply attach the multi-labeled logistic regression layer to the output of GF-RNN to complete an ordinal regression neural network for text input.
The violation of monotonicity in the estimated probabilities (e.g., INLINEFORM0 for some INLINEFORM1 and INLINEFORM2 ) has remained an open issue since the original ordinal regression neural network proposal of J. Cheng et al. ( BIBREF6 ). This is perhaps due in part to the belief that correcting this issue would significantly increase training complexity BIBREF14 . We propose an effective and computationally efficient solution that avoids conflicting predictions: penalize such conflicts in the training phase by adding INLINEFORM3
to the loss function for a sample INLINEFORM0 , where INLINEFORM1 is a penalty parameter (Figure SECREF7 ). For sufficiently large INLINEFORM2 the estimated probabilities will respect the monotonicity condition; respecting this condition improves the interpretability of the predictions, which is vital in applications like the one we consider here as stakeholders are given the estimated probabilities. We also hypothesize that the order penalty may serve as a regularizer to improve each binary classifier (see the ablation test in Section SECREF15 ).
Figure: Ordinal regression layer with order penalty.
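A sketch of how such an order penalty can be attached to the multi-labeled sigmoid layer is shown below. Because the exact penalty formula is omitted above, the hinge on adjacent exceedance probabilities used here is an assumption consistent with the description; the conflict-penalty value of 0.5 is taken from the Appendix, and the class and function names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalHead(nn.Module):
    """K-1 sigmoid units estimating P(rank > j) from the pooled ad features."""
    def __init__(self, in_dim=128, num_ranks=7):
        super().__init__()
        self.linear = nn.Linear(in_dim, num_ranks - 1)

    def forward(self, feats):
        return torch.sigmoid(self.linear(feats))            # (batch, K-1)

def ordinal_loss(probs, ranks, conflict_penalty=0.5):
    """BCE over the K-1 sub-problems plus a penalty on monotonicity violations.

    The penalty charges max(0, P(rank > j+1) - P(rank > j)) for each adjacent
    pair, i.e. any violation of the required ordering of probabilities.
    """
    k_minus_1 = probs.size(1)
    thresholds = torch.arange(1, k_minus_1 + 1, device=probs.device)
    targets = (ranks.unsqueeze(1) > thresholds).float()      # (batch, K-1)
    bce = F.binary_cross_entropy(probs, targets)
    violation = torch.relu(probs[:, 1:] - probs[:, :-1]).sum(dim=1).mean()
    return bce + conflict_penalty * violation

head = OrdinalHead()
p = head(torch.randn(4, 128))
loss = ordinal_loss(p, ranks=torch.tensor([1, 3, 5, 7]))
loss.backward()
```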
All three components of our model (word embeddings, GF-RNN, and the multi-labeled logistic regression layer) can be trained jointly, with the word embeddings optionally held fixed or given a smaller learning rate for fine-tuning. The hyperparameters for all components are given in the Appendix; they were selected according to either the literature or grid search.
Experiments
We first describe the datasets we use to train and evaluate our models. Then we present a detailed comparison of our proposed model with commonly used ordinal regression models as well as the previous state-of-the-art classification model by E. Tong et al. ( BIBREF9 ). To assess the effect of each component in our model, we perform an ablation test in which the components are swapped with their more standard alternatives one at a time. Next, we perform a qualitative analysis of the model predictions on the raw data, which are scraped from a different escort website than the one that provides the labeled training data. Finally, we conduct an emoji analysis using the word embeddings trained on the raw escort ads.
Datasets
We use raw texts scraped from Backpage and TNABoard to pre-train the word embeddings, and use the same labeled texts E. Tong et al. ( BIBREF9 ) used to conduct model comparisons. The raw text dataset consists of 44,105 ads from TNABoard and 124,220 ads from Backpage. Data cleaning/preprocessing includes joining the title and the body of an ad; adding white spaces around every emoji so that it can be tokenized properly; stripping tabs, line breaks, punctuation, and extra white spaces; removing phone numbers; and converting all letters to lower case. We have ensured that the raw dataset has no overlap with the labeled dataset to avoid bias in test accuracy. While it is possible to scrape more raw data, we did not observe significant improvements in model performance when the size of the raw data increased from INLINEFORM0 70,000 to INLINEFORM1 170,000, hence we assume that the current raw dataset is sufficiently large.
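The cleaning steps above can be approximated with a few regular expressions, as in the sketch below; the emoji character ranges and the phone-number pattern are rough assumptions rather than the authors' exact rules.

```python
import re
import string

EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji ranges (approximation)
PHONE = re.compile(r"\+?\d[\d\-() ]{6,}\d")                  # rough phone-number pattern

def preprocess(title: str, body: str) -> str:
    """Join title and body, then apply the cleaning steps listed above."""
    text = f"{title} {body}"
    text = PHONE.sub(" ", text)                               # remove phone numbers
    text = EMOJI.sub(lambda m: f" {m.group(0)} ", text)       # white space around emojis
    text = text.translate(str.maketrans({c: " " for c in string.punctuation}))
    text = re.sub(r"\s+", " ", text)                          # tabs, line breaks, extra spaces
    return text.strip().lower()

print(preprocess("Sweet girl 🍒", "Call 555-123-4567 now!!\tVisiting tonight"))
```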
The labeled dataset is called Trafficking-10k. It consists of 12,350 ads from Backpage labeled by experts in human trafficking detection BIBREF9 . Each label is one of seven ordered levels of likelihood that the corresponding ad comes from a human trafficker. Descriptions and sample proportions of the labels are in Table TABREF11 . The original Trafficking-10K includes both texts and images, but as mentioned in Section SECREF1 , only the texts are used in our case. We apply the same preprocessing to Trafficking-10k as we do to raw data.
Comparison with Baselines
We compare our proposed ordinal regression neural network (ORNN) to Immediate-Threshold ordinal logistic regression (IT) BIBREF11 , All-Threshold ordinal logistic regression (AT) BIBREF11 , Least Absolute Deviation (LAD) BIBREF22 , BIBREF23 , and multi-class logistic regression (MC) which ignores the ordering. The primary evaluation metrics are Mean Absolute Error (MAE) and macro-averaged Mean Absolute Error ( INLINEFORM0 ) BIBREF24 . To compare our model with the previous state-of-the-art classification model for escort ads, the Human Trafficking Deep Network (HTDN) BIBREF9 , we also polarize the true and predicted labels into two classes, “1-4: Unlikely” and “5-7: Likely”; then we compute the binary classification accuracy (Acc.) as well as the weighted binary classification accuracy (Wt. Acc.) given by INLINEFORM1
Note that for applications in human trafficking detection, MAE and Acc. are of primary interest, whereas for a more general comparison among the models, the class-imbalance-robust metrics, INLINEFORM0 and Wt. Acc., might be more suitable. Bootstrapping or increasing the weight of samples in smaller classes can improve INLINEFORM1 and Wt. Acc. at the cost of MAE and Acc.
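For concreteness, the sketch below computes these metrics on toy labels. Since the weighted-accuracy formula is not reproduced above, it is implemented here as the average of the two class-conditional binary accuracies, which is an assumption, as is the 1-4 vs. 5-7 polarization threshold taken from the text.

```python
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def macro_mae(y_true, y_pred):
    """Macro-averaged MAE: MAE is computed per true class, then averaged."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean([np.mean(np.abs(y_pred[y_true == c] - c))
                    for c in np.unique(y_true)])

def polarize(y):
    """Map ordinal labels to binary: 1-4 -> 'Unlikely' (0), 5-7 -> 'Likely' (1)."""
    return (np.asarray(y) >= 5).astype(int)

def weighted_accuracy(y_true, y_pred):
    """Assumed form: average of the binary accuracies on the two classes."""
    t, p = polarize(y_true), polarize(y_pred)
    return 0.5 * (np.mean(p[t == 1] == 1) + np.mean(p[t == 0] == 0))

y_true, y_pred = [1, 4, 5, 7, 6, 2], [2, 4, 4, 7, 5, 1]
print(mae(y_true, y_pred), macro_mae(y_true, y_pred),
      np.mean(polarize(y_true) == polarize(y_pred)), weighted_accuracy(y_true, y_pred))
```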
The text data need to be vectorized before they can be fed into the baseline models (whereas vectorization is built into ORNN). The standard practice is to tokenize the texts using n-grams and then create weighted term frequency vectors using the term frequency (TF)-inverse document frequency (IDF) scheme BIBREF25 , BIBREF26 . The specific variation we use is the recommended unigram + sublinear TF + smooth IDF BIBREF26 , BIBREF27 . Dimension reduction techniques such as Latent Semantic Analysis BIBREF17 can be optionally applied to the frequency vectors, but B. Schuller et al. ( BIBREF28 ) concluded from their experiments that dimension reduction on frequency vectors actually hurts model performance, which our preliminary experiments agree with.
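A minimal baseline feature pipeline of this kind can be written with scikit-learn as follows; the toy documents and labels are placeholders, and a plain logistic regression stands in for the ordinal baselines (IT, AT, LAD), which are not part of scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["brand new girl visiting today", "mature provider incall specials",
        "college girl visiting bellevue", "new spinner special rate today"]
labels = [1, 0, 1, 1]   # toy binary stand-in for the ordinal labels

vec = TfidfVectorizer(ngram_range=(1, 1),   # unigrams
                      sublinear_tf=True,    # sublinear TF
                      smooth_idf=True)      # smooth IDF
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["new girl visiting"])))
```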
All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is taken from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as the L2-penalty in IT, AT and LAD, and the learning rate in ORNN. The overall train-validation-test ratio is thus 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out, there is no unbiased estimator of the variance of CV BIBREF29 , so we report the naive standard error treating metrics across CV folds as independent.
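The splitting scheme can be reproduced roughly as follows; the array of indices is a placeholder for the vectorized ads, and the exact random seeds are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

X = np.arange(100)   # stand-in for the 12,350 vectorized ads
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    # reserve 2/9 of the training fold for validation -> 70-20-10 overall
    tr_idx, val_idx = train_test_split(train_idx, test_size=2/9, random_state=0)
    print(len(tr_idx), len(val_idx), len(test_idx))   # 70 20 10 per fold
```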
We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd-best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter uses both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models, except for LAD, can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.
Ablation Test
To ensure that we do not unnecessarily complicate our ORNN model, and to assess the impact of each component on the final model performance, we perform an ablation test. Using the same CV and evaluation metrics, we make the following replacements separately and re-evaluate the model:
1. Replace the word embeddings pre-trained by the skip-gram model with randomly initialized word embeddings;
2. replace the gated-feedback recurrent neural network with a long short-term memory network (LSTM);
3. disable batch normalization;
4. disable the residual connection;
5. replace the multi-labeled logistic regression layer with a softmax layer (i.e., let the model perform classification, treating the ordinal response variable as a categorical variable with INLINEFORM0 classes);
6. replace the multi-labeled logistic regression layer with a 1-dimensional linear layer (i.e., let the model perform regression, treating the ordinal response variable as a continuous variable) and round the prediction to the nearest integer during testing;
7. set the order penalty to 0.
The results are shown in Table TABREF16 .
The proposed ORNN once again has the best value on every metric except Wt. Acc., where it is 2nd best. This suggests that each component indeed makes a contribution. Note that if we disregard the ordinal labels and perform classification or regression, MAE deteriorates by a large margin. Setting the order penalty to 0 does not deteriorate the performance by much; however, the percentage of conflicting binary predictions (see Section SECREF7 ) rises from 1.4% to 5.2%. So adding the order penalty helps produce more interpretable results.
Qualitative Analysis of Predictions
To qualitatively evaluate how well our model predicts on raw data and to observe potential patterns in the flagged samples, we obtain predictions on the 44,105 unlabelled ads from TNABoard with the ORNN model trained on Trafficking-10k, and then examine the samples with the highest predicted likelihood of coming from traffickers. Below are the top three samples that the model considers likely:
“amazing reviewed crystal only here till fri book now please check our site for the services the girls provide all updates specials photos rates reviews njfantasygirls ...look who s back amazing reviewed model samantha...brand new spinner jessica special rate today 250 hr 21 5 4 120 34b total gfe total anything goes no limits...”
“2 hot toght 18y o spinners 4 amazing providers today specials...”
“asian college girl is visiting bellevue service type escort hair color brown eyes brown age 23 height 5 4 body type slim cup size c cup ethnicity asian service type escort i am here for you settle men i am a tiny asian girl who is waiting for a gentlemen...”
Some interesting patterns in the samples with high predicted likelihood (here we only show three) include: mention of multiple names or INLINEFORM0 providers in a single ad; possibly intentional typos and abbreviations for sensitive words, such as “tight” → “toght” and “18 year old” → “18y o”; keywords that indicate traveling of the providers, such as “till fri”, “look who s back”, and “visiting”; keywords that hint at the providers potentially being underage, such as “18y o”, “college girl”, and “tiny”; and switching between third-person and first-person narratives.
Emoji Analysis
The fight against human traffickers is adversarial and dynamic. Traffickers often avoid using explicit keywords when advertising victims, but instead use acronyms, intentional typos, and emojis BIBREF9 . Law enforcement maintains a lexicon of trafficking flags mapping certain emojis to their potential true meanings (e.g., the cherry emoji can indicate an underaged victim), but compiling such a lexicon manually is expensive, requires frequent updating, and relies on domain expertise that is hard to obtain (e.g., insider information from traffickers or their victims). To make matters worse, traffickers change their dictionaries over time and regularly switch to new emojis to replace certain keywords BIBREF9 . In such a dynamic and adversarial environment, the need for a data-driven approach in updating the existing lexicon is evident.
As mentioned in Section SECREF5 , training a skip-gram model on a text corpus can map words (including emojis) used in similar contexts to similar numeric vectors. Besides using the vectors learned from the raw escort ads to train ORNN, we can directly visualize the vectors for the emojis to help identify their relationships, by mapping the vectors to a 2-dimensional space using t-SNE BIBREF10 (Figure FIGREF24 ).
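A sketch of this projection step is shown below; the emoji tokens and the random vectors are placeholders for the actual 128-dimensional embeddings, which in practice would be taken from the trained skip-gram model (e.g. w2v.wv[token] for each emoji in its vocabulary).

```python
import numpy as np
from sklearn.manifold import TSNE

emoji_tokens = ["🍒", "🍭", "🌹", "🌸", "📞", "💋", "🍓", "🍬"]
rng = np.random.default_rng(0)
vectors = rng.normal(size=(len(emoji_tokens), 128))   # placeholders for the learned embeddings

coords = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(vectors)
for token, (x, y) in zip(emoji_tokens, coords):
    print(token, round(float(x), 2), round(float(y), 2))
```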
We can first empirically assess the quality of the emoji map by noting that similar emojis do seem clustered together: the smileys near the coordinate (2, 3), the flowers near (-6, -1), the heart shapes near (-8, 1), the phones near (-2, 4) and so on. It is worth emphasizing that the skip-gram model learns the vectors of these emojis based on their contexts in escort ads and not their visual representations, so the fact that the visually similar emojis are close to one another in the map suggests that the vectors have been learned as desired.
The emoji map can assist anti-trafficking experts in expanding the existing lexicon of trafficking flags. For example, according to the lexicon we obtained from Global Emancipation Network, the cherry emoji and the lollipop emoji are both flags for underaged victims. Near (-3, -4) in the map, right next to these two emojis are the porcelain dolls emoji, the grapes emoji, the strawberry emoji, the candy emoji, the ice cream emojis, and maybe the 18-slash emoji, indicating that they are all used in similar contexts and perhaps should all be flags for underaged victims in the updated lexicon.
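Candidate flags of this kind can also be proposed automatically by querying the embedding space for the nearest neighbours of known flags, as in the short sketch below; it reuses the toy `w2v` model from the earlier skip-gram sketch, and the seed emojis are illustrative.

```python
seed_flags = [t for t in ["🍒", "🍭"] if t in w2v.wv]   # known flags for underaged victims
if seed_flags:
    for token, score in w2v.wv.most_similar(positive=seed_flags, topn=10):
        print(token, round(score, 3))   # nearest neighbours = candidate new flags
```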
If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos.
Discussion
Human trafficking is a form of modern day slavery that victimizes millions of people. It has become the norm for sex traffickers to use escort websites to openly advertise their victims. We designed an ordinal regression neural network (ORNN) to predict the likelihood that an escort ad comes from a trafficker, which can drastically narrow down the set of possible leads for law enforcement. Our ORNN achieved the state-of-the-art performance on Trafficking-10K BIBREF9 , outperforming all baseline ordinal regression models as well as improving the classification accuracy over the Human Trafficking Deep Network BIBREF9 . We also conducted an emoji analysis and showed how to use word embeddings learned from raw text data to help expand the lexicon of trafficking flags.
Since our experiments, there have been considerable advancements in language representation models, such as BERT BIBREF30 . The new language representation models can be combined with our ordinal regression layer, replacing the skip-gram model and GF-RNN, to potentially further improve our results. However, our contributions of improving the cost function for ordinal regression neural networks, qualitatively analyzing patterns in the predicted samples, and expanding the trafficking lexicon through a data-driven approach are not dependent on a particular choice of language representation model.
As for future work in trafficking detection, we can design multi-modal ordinal regression networks that utilize both image and text data. But given the time and resources required to label escort ads, we may explore more unsupervised learning or transfer learning algorithms, such as using object detection BIBREF31 and matching algorithms to match hotel rooms in the images.
Acknowledgments
We thank Cara Jones and Marinus Analytics LLC for sharing the Trafficking-10K dataset. We thank Praveen Bodigutla for his suggestions on Natural Language Processing literature.
Hyperparameters of the proposed ordinal regression neural network
Word Embeddings: pretraining model type: Skip-gram; speedup method: negative sampling; number of negative samples: 100; noise distribution: unigram distribution raised to the 3/4 power; batch size: 16; window size: 5; minimum word count: 5; number of epochs: 50; embedding size: 128; pretraining learning rate: 0.2; fine-tuning learning rate scale: 1.0.
GF-RNN: hidden size: 128; dropout: 0.2; number of layers: 3; gradient clipping norm: 0.25; L2 penalty: 0.00001; learning rate decay factor: 2.0; learning rate decay patience: 3; early stop patience: 9; batch size: 200; batch normalization: true; residual connection: true; output layer type: mean-pooling; minimum word count: 5; maximum input length: 120.
Multi-labeled logistic regression layer: task weight scheme: uniform; conflict penalty: 0.5.
Access to the source materials
The fight against human trafficking is adversarial, hence the access to the source materials in anti-trafficking research is typically not available to the general public by choice, but granted to researchers and law enforcement individually upon request.
Source code: https://gitlab.com/BlazingBlade/TrafficKill
Trafficking-10k: Contact [email protected]
Trafficking lexicon: Contact [email protected] | Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of the best state-of-the-art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAE^M), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)
43761478c26ad65bec4f0fd511ec3181a100681c | 43761478c26ad65bec4f0fd511ec3181a100681c_0 | Q: Do they use pretrained word embeddings?
Text: (see the article reproduced under the previous question) | Yes
01866fe392d9196dda1d0b472290edbd48a99f66 | 01866fe392d9196dda1d0b472290edbd48a99f66_0 | Q: How is the lexicon of trafficking flags expanded?
Text: Introduction
Globally, human trafficking is one of the fastest growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recent estimates suggest that nearly 4 million adults and 1 million children are being victimized globally on any given day; furthermore, it is estimated that 99 percent of victims are female BIBREF1 . Escort websites are an increasingly popular vehicle for selling the services of trafficking victims. According to a recent survivor survey BIBREF2 , 38% of underage trafficking victims who were enslaved prior to 2004 were advertised online, and that number rose to 75% for those enslaved after 2004. Prior to its shutdown in April 2018, the website Backpage was the most frequently used online advertising platform; other popular escort websites include Craigslist, Redbook, SugarDaddy, and Facebook BIBREF2 . Despite the seizure of Backpage, there were nearly 150,000 new online sex advertisements posted per day in the U.S. alone in late 2018 BIBREF3 ; even with many of these new ads being re-posts of existing ads and traffickers often posting multiple ads for the same victims BIBREF2 , this volume is staggering.
Because of their ubiquity and public access, escort websites are a rich resource for anti-trafficking operations. However, many law enforcement agencies do not have the resources to sift through the volume of escort ads to identify those coming from potential traffickers. One scalable and efficient solution is to build a statistical model to predict the likelihood of an ad coming from a trafficker using a dataset annotated by anti-trafficking experts. We propose an ordinal regression neural network tailored for text input. This model comprises three components: (i) a Word2Vec model BIBREF4 that maps each word from the text input to a numeric vector, (ii) a gated-feedback recurrent neural network BIBREF5 that sequentially processes the word vectors, and (iii) an ordinal regression layer BIBREF6 that produces a predicted ordinal label. We use a modified cost function to mitigate inconsistencies in predictions associated with nonparametric ordinal regression. We also leverage several regularization techniques for deep neural networks to further improve model performance, such as residual connection BIBREF7 and batch normalization BIBREF8 . We conduct our experiments on Trafficking-10k BIBREF9 , a dataset of escort ads for which anti-trafficking experts assigned each sample one of seven ordered labels ranging from “1: Very Unlikely (to come from traffickers)” to “7: Very Likely”. Our proposed model significantly outperforms previously published models BIBREF9 on Trafficking-10k as well as a variety of baseline ordinal regression models. In addition, we analyze the emojis used in escort ads with Word2Vec and t-SNE BIBREF10 , and we show that the lexicon of trafficking-related emojis can be subsequently expanded.
In Section SECREF2 , we discuss related work on human trafficking detection and ordinal regression. In Section SECREF3 , we present our proposed model and detail its components. In Section SECREF4 , we present the experimental results, including the Trafficking-10K benchmark, a qualitative analysis of the predictions on raw data, and the emoji analysis. In Section SECREF5 , we summarize our findings and discuss future work.
Related Work
Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads; however, we go further by analyzing the emojis' embeddings and thereby expanding the trafficking lexicon.
Ordinal regression: We briefly review ordinal regression before introducing the proposed methodology. We assume that the training data are $\{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathcal{X}$ are the features and $y_i \in \mathcal{Y}$ is the response; $\mathcal{Y}$ is the set of $k$ ordered labels $\{1, 2, \ldots, k\}$ with $1 \prec 2 \prec \cdots \prec k$. Many ordinal regression methods learn a composite map $h = g \circ f$, where $f: \mathcal{X} \to \mathbb{R}$ and $g: \mathbb{R} \to \mathcal{Y}$ have the interpretation that $f(x)$ is a latent “score” which is subsequently discretized into a category by $g$. The map $h$ is often estimated by empirical risk minimization, i.e., by minimizing a loss function $\ell(h(x), y)$ averaged over the training data. Standard choices for these components are reviewed by J. Rennie & N. Srebro ( BIBREF11 ).
Another common approach to ordinal regression, which we adopt in our proposed method, is to transform the label prediction into a series of $k-1$ binary classification sub-problems, wherein the $j$th sub-problem is to predict whether or not the true label exceeds $j$ BIBREF12 , BIBREF13 . For example, one might use a series of logistic regression models to estimate the conditional probabilities $P(y > j \mid x)$ for each $j \in \{1, \ldots, k-1\}$. J. Cheng et al. ( BIBREF6 ) estimated these probabilities jointly using a neural network; this was later extended to image data BIBREF14 as well as text data BIBREF15 , BIBREF16 . However, as acknowledged by J. Cheng et al. ( BIBREF6 ), the estimated probabilities need not respect the ordering $\hat{P}(y > j \mid x) \geq \hat{P}(y > j+1 \mid x)$ for all $x$ and $j$. We force our estimator to respect this ordering through a penalty on its violation.
Method
Our proposed ordinal regression model consists of the following three components: Word embeddings pre-trained by a Skip-gram model, a gated-feedback recurrent neural network that constructs summary features from sentences, and a multi-labeled logistic regression layer tailored for ordinal regression. See Figure SECREF3 for a schematic. The details of its components and their respective alternatives are discussed below.
Figure: Overview of the ordinal regression neural network for text input. $h$ represents a hidden state in a gated-feedback recurrent neural network.
Word Embeddings
Vector representations of words, also known as word embeddings, can be obtained through unsupervised learning on a large text corpus so that certain linguistic regularities and patterns are encoded. Compared to Latent Semantic Analysis BIBREF17 , embedding algorithms using neural networks are particularly good at preserving linear regularities among words in addition to grouping similar words together BIBREF18 . Such embeddings can in turn help other algorithms achieve better performances in various natural language processing tasks BIBREF4 .
Unfortunately, the escort ads contain a plethora of emojis, acronyms, and (sometimes deliberate) typographical errors that are not encountered in more standard text data, which suggests that it is likely better to learn word embeddings from scratch on a large collection of escort ads instead of using previously published embeddings BIBREF9 . We use 168,337 ads scraped from Backpage as our training corpus and the Skip-gram model with Negative sampling BIBREF4 as our model.
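For illustration, a minimal sketch of this pretraining step with the gensim library (assuming the gensim 4.x API) could look as follows; the corpus path and tokenization are illustrative placeholders, while the hyperparameters mirror those listed in the Appendix.

from gensim.models import Word2Vec

# Each ad is assumed to be preprocessed (lowercased, emojis separated by
# spaces, phone numbers removed) and tokenized into a list of tokens.
ads = [line.split() for line in open("escort_ads_preprocessed.txt", encoding="utf-8")]

w2v = Word2Vec(
    sentences=ads,
    vector_size=128,   # embedding size
    window=5,          # context window size
    min_count=5,       # minimum word count
    sg=1,              # skip-gram rather than CBOW
    negative=100,      # number of negative samples
    ns_exponent=0.75,  # unigram distribution raised to the 3/4 power
    epochs=50,
)
w2v.save("escort_ads_skipgram.w2v")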
Gated-Feedback Recurrent Neural Network
To process entire sentences and paragraphs after mapping the words to embeddings, we need a model to handle sequential data. Recurrent neural networks (RNNs) have recently seen great success at modeling sequential data, especially in natural language processing tasks BIBREF19 . On a high level, an RNN is a neural network that processes a sequence of inputs one at a time, taking the summary of the sequence seen so far from the previous time point as an additional input and producing a summary for the next time point. One of the most widely used variations of RNNs, a Long short-term memory network (LSTM), uses various gates to control the information flow and is able to better preserve long-term dependencies in the running summary compared to a basic RNN BIBREF20 . In our implementation, we use a further refinement of multi-layered LSTMs, Gated-feedback recurrent neural networks (GF-RNNs), which tend to capture dependencies across different timescales more easily BIBREF5 .
Regularization techniques for neural networks including Dropout BIBREF21 , Residual connection BIBREF7 , and Batch normalization BIBREF8 are added to GF-RNN for further improvements.
After GF-RNN processes an entire escort ad, the average of the hidden states of the last layer becomes the input for the multi-labeled logistic regression layer which we discuss next.
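Because gated-feedback RNNs are not part of standard deep learning toolkits, the following PyTorch sketch substitutes a stacked LSTM for GF-RNN purely for illustration; the layer sizes follow the Appendix (3 layers, hidden size 128, dropout 0.2), and the mean over time of the last layer's hidden states serves as the ad representation.

import torch
import torch.nn as nn

class AdEncoder(nn.Module):
    """Embeds tokens, runs a recurrent encoder, and mean-pools its hidden states."""

    def __init__(self, vocab_size, emb_dim=128, hidden=128, layers=3, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Stand-in for the gated-feedback RNN described above.
        self.rnn = nn.LSTM(emb_dim, hidden, num_layers=layers,
                           dropout=dropout, batch_first=True)

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        x = self.embed(token_ids)          # (batch, seq_len, emb_dim)
        outputs, _ = self.rnn(x)           # hidden states of the last layer
        return outputs.mean(dim=1)         # (batch, hidden), averaged over time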
Multi-Labeled Logistic Regression Layer
As noted previously, the ordinal regression problem can be cast into a series of binary classification problems and thereby utilize the large repository of available classification algorithms BIBREF12 , BIBREF13 , BIBREF14 . One formulation is as follows. Given $k$ total ranks, the $j$-th binary classifier is trained to predict the probability that a sample $x$ has rank larger than $j$. Then the predicted rank is $\hat{y} = 1 + \sum_{j=1}^{k-1} \mathbb{1}\{\hat{P}(y > j \mid x) > 0.5\}$.
In a classification task, the final layer of a deep neural network is typically a softmax layer with dimension equal to the number of classes BIBREF20 . Using the ordinal-regression-to-binary-classifications formulation described above, J. Cheng et al. ( BIBREF6 ) replaced the softmax layer in their neural network with a $(k-1)$-dimensional sigmoid layer, where each neuron serves as a binary classifier (see Figure SECREF7 but without the order penalty to be discussed later).
With the sigmoid activation function, the output of the $j$th neuron can be viewed as the predicted probability that the sample has rank greater than $j$. Alternatively, the entire sigmoid layer can be viewed as performing multi-labeled logistic regression, where the $j$th label is the indicator of the sample's rank being greater than $j$. The training data are thus re-formatted accordingly, so that the response variable for a sample with rank $r$ becomes a $(k-1)$-dimensional binary vector whose first $r-1$ entries are 1 and whose remaining entries are 0. J. Cheng et al.'s ( BIBREF6 ) final layer was preceded by a simple feed-forward network. In our case, word embeddings and GF-RNN allow us to construct a feature vector of fixed length from text input, so we can simply attach the multi-labeled logistic regression layer to the output of GF-RNN to complete an ordinal regression neural network for text input.
The violation of the monotonicity in the estimated probabilities (e.g., $\hat{P}(y > j \mid x) < \hat{P}(y > j+1 \mid x)$ for some $x$ and $j$) has remained an open issue since the original ordinal regression neural network proposal of J. Cheng et al. ( BIBREF6 ). This is perhaps owed in part to the belief that correcting this issue would significantly increase training complexity BIBREF14 . We propose an effective and computationally efficient solution to avoid the conflicting predictions as follows: penalize such conflicts in the training phase by adding $\lambda \sum_{j=1}^{k-2} \max\{0,\ \hat{P}(y > j{+}1 \mid x) - \hat{P}(y > j \mid x)\}$
to the loss function for a sample $x$, where $\lambda$ is a penalty parameter (Figure SECREF7 ). For sufficiently large $\lambda$ the estimated probabilities will respect the monotonicity condition; respecting this condition improves the interpretability of the predictions, which is vital in applications like the one we consider here as stakeholders are given the estimated probabilities. We also hypothesize that the order penalty may serve as a regularizer to improve each binary classifier (see the ablation test in Section SECREF15 ).
Figure: Ordinal regression layer with order penalty.
All three components of our model (word embeddings, GF-RNN, and multi-labeled logistic regression layer) can be trained jointly, with word embeddings optionally held fixed or given a smaller learning rate for fine-tuning. The hyperparameters for all components are given in the Appendix. They are selected according to either literature or grid-search.
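To make the final layer and its training objective concrete, the following is a minimal PyTorch sketch of the multi-labeled sigmoid layer with the order penalty; the binary target encoding, the rank decoding rule, and the exact penalty form are our reading of the description above rather than code from the authors, and the conflict penalty of 0.5 follows the Appendix.

import torch
import torch.nn as nn
import torch.nn.functional as F

K = 7  # number of ordered labels in Trafficking-10k

class OrdinalHead(nn.Module):
    """K-1 sigmoid units; unit j estimates P(rank > j)."""
    def __init__(self, in_dim, k=K):
        super().__init__()
        self.linear = nn.Linear(in_dim, k - 1)

    def forward(self, features):
        return torch.sigmoid(self.linear(features))    # (batch, k-1)

def encode_targets(ranks, k=K):
    """Rank r becomes a binary vector whose first r-1 entries are 1."""
    thresholds = torch.arange(1, k, device=ranks.device)
    return (ranks.unsqueeze(1) > thresholds).float()    # (batch, k-1)

def decode_ranks(probs):
    """Predicted rank = 1 + number of sub-problems answered 'yes'."""
    return 1 + (probs > 0.5).sum(dim=1)

def ordinal_loss(probs, ranks, conflict_penalty=0.5, k=K):
    bce = F.binary_cross_entropy(probs, encode_targets(ranks, k))
    # Penalize P(rank > j+1) exceeding P(rank > j), i.e., order violations.
    violation = F.relu(probs[:, 1:] - probs[:, :-1]).sum(dim=1).mean()
    return bce + conflict_penalty * violation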
Experiments
We first describe the datasets we use to train and evaluate our models. Then we present a detailed comparison of our proposed model with commonly used ordinal regression models as well as the previous state-of-the-art classification model by E. Tong et al. ( BIBREF9 ). To assess the effect of each component in our model, we perform an ablation test where the components are swapped by their more standard alternatives one at a time. Next, we perform a qualitative analysis on the model predictions on the raw data, which are scraped from a different escort website than the one that provides the labeled training data. Finally, we conduct an emoji analysis using the word embeddings trained on raw escort ads.
Datasets
We use raw texts scraped from Backpage and TNABoard to pre-train the word embeddings, and use the same labeled texts E. Tong et al. ( BIBREF9 ) used to conduct model comparisons. The raw text dataset consists of 44,105 ads from TNABoard and 124,220 ads from Backpage. Data cleaning/preprocessing includes joining the title and the body of an ad; adding white spaces around every emoji so that it can be tokenized properly; stripping tabs, line breaks, punctuation, and extra white spaces; removing phone numbers; and converting all letters to lower case. We have ensured that the raw dataset has no overlap with the labeled dataset to avoid bias in test accuracy. While it is possible to scrape more raw data, we did not observe significant improvements in model performance when the size of the raw data increased from roughly 70,000 to roughly 170,000 ads, hence we assume that the current raw dataset is sufficiently large.
The labeled dataset is called Trafficking-10k. It consists of 12,350 ads from Backpage labeled by experts in human trafficking detection BIBREF9 . Each label is one of seven ordered levels of likelihood that the corresponding ad comes from a human trafficker. Descriptions and sample proportions of the labels are in Table TABREF11 . The original Trafficking-10K includes both texts and images, but as mentioned in Section SECREF1 , only the texts are used in our case. We apply the same preprocessing to Trafficking-10k as we do to raw data.
Comparison with Baselines
We compare our proposed ordinal regression neural network (ORNN) to Immediate-Threshold ordinal logistic regression (IT) BIBREF11 , All-Threshold ordinal logistic regression (AT) BIBREF11 , Least Absolute Deviation (LAD) BIBREF22 , BIBREF23 , and multi-class logistic regression (MC) which ignores the ordering. The primary evaluation metrics are Mean Absolute Error (MAE) and macro-averaged Mean Absolute Error ( INLINEFORM0 ) BIBREF24 . To compare our model with the previous state-of-the-art classification model for escort ads, the Human Trafficking Deep Network (HTDN) BIBREF9 , we also polarize the true and predicted labels into two classes, “1-4: Unlikely” and “5-7: Likely”; then we compute the binary classification accuracy (Acc.) as well as the weighted binary classification accuracy (Wt. Acc.) given by INLINEFORM1
Note that for applications in human trafficking detection, MAE and Acc. are of primary interest, whereas for a more general comparison among the models, the class-imbalance-robust metrics, INLINEFORM0 and Wt. Acc., might be more suitable. Bootstrapping or increasing the weight of samples in smaller classes can improve INLINEFORM1 and Wt. Acc. at the cost of MAE and Acc.
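A sketch of how these metrics could be computed is given below; because the exact weighted-accuracy formula is not reproduced in this text, the balanced-accuracy form used here is an assumption on our part, and the label polarization follows the 1-4 vs. 5-7 split described above.

import numpy as np

def mae(y_true, y_pred):
    return np.abs(np.asarray(y_true) - np.asarray(y_pred)).mean()

def macro_mae(y_true, y_pred):
    """Average of the per-class MAEs, which is robust to class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean([np.abs(y_pred[y_true == c] - c).mean()
                    for c in np.unique(y_true)])

def polarize(labels):
    """Map ordinal labels 1-4 to 'Unlikely' (0) and 5-7 to 'Likely' (1)."""
    return (np.asarray(labels) >= 5).astype(int)

def weighted_accuracy(y_true_bin, y_pred_bin):
    # Assumed definition: unweighted mean of the two per-class recalls.
    y_true_bin, y_pred_bin = np.asarray(y_true_bin), np.asarray(y_pred_bin)
    recalls = [np.mean(y_pred_bin[y_true_bin == c] == c) for c in (0, 1)]
    return float(np.mean(recalls))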
The text data need to be vectorized before they can be fed into the baseline models (whereas vectorization is built into ORNN). The standard practice is to tokenize the texts using n-grams and then create weighted term frequency vectors using the term frequency (TF)-inverse document frequency (IDF) scheme BIBREF25 , BIBREF26 . The specific variation we use is the recommended unigram + sublinear TF + smooth IDF BIBREF26 , BIBREF27 . Dimension reduction techniques such as Latent Semantic Analysis BIBREF17 can be optionally applied to the frequency vectors, but B. Schuller et al. ( BIBREF28 ) concluded from their experiments that dimension reduction on frequency vectors actually hurts model performance, which our preliminary experiments agree with.
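For reference, this baseline vectorization could be sketched with scikit-learn as follows (train_texts and test_texts are illustrative placeholders):

from sklearn.feature_extraction.text import TfidfVectorizer

# Unigrams with sublinear TF (1 + log tf) and smoothed IDF, as recommended.
vectorizer = TfidfVectorizer(ngram_range=(1, 1),
                             sublinear_tf=True,
                             smooth_idf=True)
X_train = vectorizer.fit_transform(train_texts)   # sparse inputs for IT/AT/LAD/MC
X_test = vectorizer.transform(test_texts)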
All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, part of the training set is further reserved as the validation set for tuning hyperparameters such as the L2-penalty in IT, AT and LAD, and the learning rate in ORNN, so that the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out, there is no unbiased estimator of the variance of CV BIBREF29 , so we report the naive standard error, treating metrics across CV folds as independent.
We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter uses both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models, except for LAD, can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.
Ablation Test
To ensure that we do not unnecessarily complicate our ORNN model, and to assess the impact of each component on the final model performance, we perform an ablation test. Using the same CV and evaluation metrics, we make the following replacements separately and re-evaluate the model: 1. Replace the word embeddings pre-trained from the skip-gram model with randomly initialized word embeddings; 2. replace the gated-feedback recurrent neural network with a long short-term memory network (LSTM); 3. disable batch normalization; 4. disable residual connection; 5. replace the multi-labeled logistic regression layer with a softmax layer (i.e., let the model perform classification, treating the ordinal response variable as a categorical variable with $k$ classes); 6. replace the multi-labeled logistic regression layer with a 1-dimensional linear layer (i.e., let the model perform regression, treating the ordinal response variable as a continuous variable) and round the prediction to the nearest integer during testing; 7. set the order penalty to 0. The results are shown in Table TABREF16 .
The proposed ORNN once again has all the best metrics except for Wt. Acc., which is the 2nd best. This suggests that each component indeed makes a contribution. Note that if we disregard the ordinal labels and perform classification or regression, MAE deteriorates by a large margin. Setting the order penalty to 0 does not deteriorate the performance by much; however, the percentage of conflicting binary predictions (see Section SECREF7 ) rises from 1.4% to 5.2%. So adding an order penalty helps produce more interpretable results.
Qualitative Analysis of Predictions
To qualitatively evaluate how well our model predicts on raw data and observe potential patterns in the flagged samples, we obtain predictions on the 44,105 unlabelled ads from TNABoard with the ORNN model trained on Trafficking-10k, then we examine the samples with high predicted likelihood to come from traffickers. Below are the top three samples that the model considers likely:
“amazing reviewed crystal only here till fri book now please check our site for the services the girls provide all updates specials photos rates reviews njfantasygirls ...look who s back amazing reviewed model samantha...brand new spinner jessica special rate today 250 hr 21 5 4 120 34b total gfe total anything goes no limits...”
“2 hot toght 18y o spinners 4 amazing providers today specials...”
“asian college girl is visiting bellevue service type escort hair color brown eyes brown age 23 height 5 4 body type slim cup size c cup ethnicity asian service type escort i am here for you settle men i am a tiny asian girl who is waiting for a gentlemen...”
Some interesting patterns in the samples with high predicted likelihood (here we only showed three) include: mentioning of multiple names or more than one provider in a single ad; possibly intentional typos and abbreviations for sensitive words, such as “tight” → “toght” and “18 year old” → “18y o”; keywords that indicate traveling of the providers, such as “till fri”, “look who s back”, and “visiting”; keywords that hint at the providers potentially being underage, such as “18y o”, “college girl”, and “tiny”; and switching between third person and first person narratives.
Emoji Analysis
The fight against human traffickers is adversarial and dynamic. Traffickers often avoid using explicit keywords when advertising victims, but instead use acronyms, intentional typos, and emojis BIBREF9 . Law enforcement maintains a lexicon of trafficking flags mapping certain emojis to their potential true meanings (e.g., the cherry emoji can indicate an underaged victim), but compiling such a lexicon manually is expensive, requires frequent updating, and relies on domain expertise that is hard to obtain (e.g., insider information from traffickers or their victims). To make matters worse, traffickers change their dictionaries over time and regularly switch to new emojis to replace certain keywords BIBREF9 . In such a dynamic and adversarial environment, the need for a data-driven approach in updating the existing lexicon is evident.
As mentioned in Section SECREF5 , training a skip-gram model on a text corpus can map words (including emojis) used in similar contexts to similar numeric vectors. Besides using the vectors learned from the raw escort ads to train ORNN, we can directly visualize the vectors for the emojis to help identify their relationships, by mapping the vectors to a 2-dimensional space using t-SNE BIBREF10 (Figure FIGREF24 ).
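Concretely, such a map could be produced along the following lines, assuming the skip-gram model w2v trained on the raw ads; the emoji-detection heuristic and the perplexity value are illustrative choices rather than details reported here.

import numpy as np
from sklearn.manifold import TSNE

def is_emoji(token):
    # Illustrative heuristic: treat single non-ASCII symbol characters as emojis.
    return len(token) == 1 and ord(token) > 0x2100

emojis = [t for t in w2v.wv.index_to_key if is_emoji(t)]
vectors = np.stack([w2v.wv[t] for t in emojis])

coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(vectors)
# coords[i] gives the 2-D position of emojis[i], which can then be plotted.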
We can first empirically assess the quality of the emoji map by noting that similar emojis do seem clustered together: the smileys near the coordinate (2, 3), the flowers near (-6, -1), the heart shapes near (-8, 1), the phones near (-2, 4) and so on. It is worth emphasizing that the skip-gram model learns the vectors of these emojis based on their contexts in escort ads and not their visual representations, so the fact that the visually similar emojis are close to one another in the map suggests that the vectors have been learned as desired.
The emoji map can assist anti-trafficking experts in expanding the existing lexicon of trafficking flags. For example, according to the lexicon we obtained from Global Emancipation Network, the cherry emoji and the lollipop emoji are both flags for underaged victims. Near (-3, -4) in the map, right next to these two emojis are the porcelain dolls emoji, the grapes emoji, the strawberry emoji, the candy emoji, the ice cream emojis, and maybe the 18-slash emoji, indicating that they are all used in similar contexts and perhaps should all be flags for underaged victims in the updated lexicon.
If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos.
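One simple data-driven way to propose candidate additions to the lexicon is to query the embedding space for the nearest neighbors of emojis that are already flags; the sketch below assumes the skip-gram model from earlier and an illustrative seed list, and any proposed candidate would still need to be vetted by anti-trafficking experts.

def propose_lexicon_candidates(model, seed_flags, topn=10):
    """Return embedding-space neighbors of known flags as candidate flags."""
    candidates = {}
    for flag in seed_flags:
        if flag in model.wv:
            for neighbor, similarity in model.wv.most_similar(flag, topn=topn):
                candidates.setdefault(neighbor, similarity)
    return sorted(candidates.items(), key=lambda kv: -kv[1])

# Illustrative seed: emojis the existing lexicon already flags for underaged victims.
seeds = ["\U0001F352", "\U0001F36D"]   # cherries, lollipop
print(propose_lexicon_candidates(w2v, seeds))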
Discussion
Human trafficking is a form of modern day slavery that victimizes millions of people. It has become the norm for sex traffickers to use escort websites to openly advertise their victims. We designed an ordinal regression neural network (ORNN) to predict the likelihood that an escort ad comes from a trafficker, which can drastically narrow down the set of possible leads for law enforcement. Our ORNN achieved the state-of-the-art performance on Trafficking-10K BIBREF9 , outperforming all baseline ordinal regression models as well as improving the classification accuracy over the Human Trafficking Deep Network BIBREF9 . We also conducted an emoji analysis and showed how to use word embeddings learned from raw text data to help expand the lexicon of trafficking flags.
Since our experiments, there have been considerable advancements in language representation models, such as BERT BIBREF30 . The new language representation models can be combined with our ordinal regression layer, replacing the skip-gram model and GF-RNN, to potentially further improve our results. However, our contributions of improving the cost function for ordinal regression neural networks, qualitatively analyzing patterns in the predicted samples, and expanding the trafficking lexicon through a data-driven approach are not dependent on a particular choice of language representation model.
As for future work in trafficking detection, we can design multi-modal ordinal regression networks that utilize both image and text data. But given the time and resources required to label escort ads, we may explore more unsupervised learning or transfer learning algorithms, such as using object detection BIBREF31 and matching algorithms to match hotel rooms in the images.
Acknowledgments
We thank Cara Jones and Marinus Analytics LLC for sharing the Trafficking-10K dataset. We thank Praveen Bodigutla for his suggestions on Natural Language Processing literature.
Hyperparameters of the proposed ordinal regression neural network
Word Embeddings: pretraining model type: Skip-gram; speedup method: negative sampling; number of negative samples: 100; noise distribution: unigram distribution raised to the 3/4 power; batch size: 16; window size: 5; minimum word count: 5; number of epochs: 50; embedding size: 128; pretraining learning rate: 0.2; fine-tuning learning rate scale: 1.0.
GF-RNN: hidden size: 128; dropout: 0.2; number of layers: 3; gradient clipping norm: 0.25; L2 penalty: 0.00001; learning rate decay factor: 2.0; learning rate decay patience: 3; early stop patience: 9; batch size: 200; batch normalization: true; residual connection: true; output layer type: mean-pooling; minimum word count: 5; maximum input length: 120.
Multi-labeled logistic regression layer: task weight scheme: uniform; conflict penalty: 0.5.
Access to the source materials
The fight against human trafficking is adversarial, hence the access to the source materials in anti-trafficking research is typically not available to the general public by choice, but granted to researchers and law enforcement individually upon request.
Source code:
https://gitlab.com/BlazingBlade/TrafficKill
Trafficking-10k: Contact
[email protected]
Trafficking lexicon: Contact
[email protected] | re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones |
394cf73c0aac8ccb45ce1b133f4e765e8e175403 | 394cf73c0aac8ccb45ce1b133f4e765e8e175403_0 | Q: Do they experiment with the dataset?
Text: Introduction
In contrast to traditional content distribution channels like television, radio and newspapers, Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although most of the time, their Internet use is harmless, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks makes them vulnerable to threatening situations on the Web, such as trolling.
Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychologically disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the need to develop automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.
In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.
In light of this discussion, we believe that there is a need to identify not only the trolling attempts, but also comments that could have a negative psychological impact on their recipients. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling. Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, its surrounding context, and its immediate responses and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.
To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling.
Related Work
In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.
In the realm of psychology, bishop2013effect and bishop2014representations elaborate a deep description of a troll's personality, motivations, effects on the communities in which trolls interfere, and the criminal and psychological aspects of trolls. Their main focus is flaming (trolls), and hostile and aggressive interactions between users BIBREF1 .
On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a networks related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.
There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3 . There are two clear differences between their work and ours. One is that trolling is concerned about not only abusive language but also a much larger range of language styles and addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argued that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify between bullying and no bullying in posts. van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories.
Trolling Categorization
In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.
Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.
To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.
The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.
For a given (suspected trolling attempt, responses) pair, not all of the 189 (= 3 × 3 × 3 × 7) combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention disclosure (D) and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B).
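For concreteness, these constraints can be expressed as a simple validity check over a single annotated response; the sketch below writes the aspect values from this section as plain strings purely for illustration (the full label inventory appears in Table TABREF1).

def valid_annotation(intention, disclosure, interpretation, response):
    """Check the logical constraints a)-c) for one (attempt, response) annotation."""
    # a) Trolling or Playing intentions must have Hidden or Exposed disclosure.
    if intention in ("Trolling", "Playing") and disclosure not in ("Hidden", "Exposed"):
        return False
    # b) Normal intentions can only have None as the disclosure value.
    if intention == "Normal" and disclosure != "None":
        return False
    # c) Trolling or Playing interpretations cannot have a Normal response strategy.
    if interpretation in ("Trolling", "Playing") and response == "Normal":
        return False
    return True

valid_annotation("Trolling", "Exposed", "Trolling", "Frustrate")   # True
valid_annotation("Normal", "Hidden", "Normal", "Normal")           # False, violates b)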
Conversation Excerpts
To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.
Example 1.
Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.
I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad
Trollname trollpost brotroll
In this example, C1 is teasing C0, expecting to provoke or irritate him, and he is clearly disclosing his trolling intentions. In C0's response, we see that he clearly believes that C1 is trolling, since he directly calls him a “brotroll”, and his response strategy is to frustrate the trolling attempt by denouncing C1's trolling intentions (“trollpost”) and true identity (“brotroll”).
Example 2.
Please post a video of your dog doing this. The way I'm imagining this is adorable.
I hope the dog gets run over by a truck on the way out of the childrens playground.
If you're going to troll, can you at least try to be a bit more convincing?
Haha I hope the cancer kills you.
In this example, we observe that C0's first comment is making a polite request (“Please”). In return, C1's reply is a mean-spirited comment whose intention is to disrupt and possibly hurt C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention, and her response strategy is a criticism, which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear: he believes that C1 is trolling, and the negative effect is so tangible that his response strategy is to troll back or counter-troll by replying with a comparably mean comment.
Corpus and Annotation
Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in a story's comment section, which takes the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations on Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates of real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.
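As a rough illustration of the retrieval criterion, the edit-distance-1 match against the word “troll” could be emulated in plain Python as follows; in practice the authors query a Lucene index, so the snippet below is only a sketch of the matching rule, and the comments variable is an assumed list of comment dictionaries.

def edit_distance(a, b):
    """Standard Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def mentions_troll(comment_text, max_edits=1):
    return any(edit_distance(token, "troll") <= max_edits
               for token in comment_text.lower().split())

# comments is assumed to be a list of dicts with a "body" field, as in Reddit dumps.
suspected = [c for c in comments if mentions_troll(c["body"])]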
For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.
We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.
Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them.
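For reference, the agreement values could be computed from the doubly-annotated snippets with scikit-learn as follows; annotator_1 and annotator_2 are assumed to map each aspect to that annotator's list of labels.

from sklearn.metrics import cohen_kappa_score

for aspect in ("intention", "intention_disclosure", "interpretation", "response"):
    kappa = cohen_kappa_score(annotator_1[aspect], annotator_2[aspect])
    print(f"{aspect}: kappa = {kappa:.3f}")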
Trolling Attempt Prediction
In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features.
Feature Sets
For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.
N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .
Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.
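For instance, with the VADER implementation shipped in NLTK, the four polarity features could be extracted as in the sketch below.

from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Requires the 'vader_lexicon' resource: nltk.download('vader_lexicon')
analyzer = SentimentIntensityAnalyzer()

def polarity_features(comment_text):
    scores = analyzer.polarity_scores(comment_text)
    # Four real-valued features: positive, neutral, negative, and the composite score.
    return [scores["pos"], scores["neu"], scores["neg"], scores["compound"]]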
Emoticons. Reddit's comments make extensive use of emoticons. We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed by hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.
Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.
Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.
Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories . The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.
Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters often include curse words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in their usernames.
Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.
Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling in confrontation and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.
GloVe Embeddings. All the aforementioned features constitute a high dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and weight in correlations of semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vector of each lowercase token in the comment found in the Twitter corpus pre-trained GloVe vectors BIBREF10 . The resulting comment vector representation is a 200 dimensional array that is concatenated with the existing BOW.
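A sketch of this averaging step is shown below; loading the vectors through gensim's downloader is an illustrative choice rather than the exact mechanism used in this work.

import numpy as np
import gensim.downloader as api

glove = api.load("glove-twitter-200")   # 200-dimensional Twitter GloVe vectors

def comment_embedding(comment_text, dim=200):
    tokens = [t.lower() for t in comment_text.split()]
    vectors = [glove[t] for t in tokens if t in glove]
    # Average the word vectors; fall back to zeros if no token is in the vocabulary.
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)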
Results
Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one per each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19 the leftmost results column reports F1 score based on majority class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time. The goal of the later set of experiments is to gain insights about feature predictive effectiveness. The right side section (All features) shows the system performance measured using recall, precision, and F-1 as shown when all features described in section SECREF13 are used.
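For reference, a bare-bones version of this setup with scikit-learn might look as follows; the feature matrix X and the per-aspect label vectors are placeholders, and the actual experiments additionally tune the regularization parameter on a development fold.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import f1_score

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# One independent classifier per trolling aspect, all trained on the same features X.
for aspect, y in {"I": y_intention, "D": y_disclosure,
                  "R": y_interpretation, "B": y_response}.items():
    preds = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=cv)
    print(aspect, f1_score(y, preds, average="macro"))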
The majority class prediction experiment is the simplest baseline to which we can compare the rest of the experiments. In order to illustrate the prediction power of each feature group independent from all others, we perform the “Single Feature Group” experiments. As we can observe in Table TABREF19 , there are groups of features that independently are not better than the majority baseline; for example, the emoticons, politeness cues and polarity are not better disclosure predictors than the majority baseline. Also, we observe that n-grams and GloVe features are the only groups of features that contribute to more than one class type across the different tasks. Now, the “All Features” experiment shows how the interaction between feature sets performs better than any of the individual feature groups in isolation. The accuracy metric for each trolling task is meant to provide an overall performance measure for all the classes within a particular task, and to allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.
The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section.
Error Analysis
In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.
Errors on Intention (I) prediction: The lack of background knowledge is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. This kind of error reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from ontologies along with sentiment or polarity information. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.
Non-cursing aggressions and insults: This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task of determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.
Another source of error is the presence of controversial topic words such as “black”, “feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too eager to classify a comment as trolling in the presence of these words, even though in many cases such comments are not trolling attempts. In order to ameliorate this problem, one could create ad-hoc word embeddings by training GloVe or another type of distributed representation on a large corpus from the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words, which might help to reduce these errors.
Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model, even when augmented with the distributional features given by the GloVe vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is desirable for a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.
Errors on Interpretation (R) prediction: It is common practice for many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task problem: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of earlier responses' interpretations as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.
Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the Frustrate responses towards the suspected troll's comment, while “Neutralizing” comments acknowledge that the suspected troll has trolling intentions but give no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” are examples of this subtle difference. The first is a case of “neutralize” while the second is indeed criticizing the suspected troll's comment and is therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.
Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is so provoked by the suspected comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, with the comments evolving into very agitated remarks. One may then use this information to disambiguate between the two classes.
Conclusion and Future Work
We presented a new view on the computational modeling of trolling in Internet fora, where we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we create an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling. | Yes
2c4003f25e8d95a3768204f52a7a5f5e17cb2102 | 2c4003f25e8d95a3768204f52a7a5f5e17cb2102_0 | Q: Do they use a crowdsourcing platform for annotation?
Text: Introduction
In contrast to traditional content distribution channels like television, radio and newspapers, Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although most of the time, their Internet use is harmless, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks makes them vulnerable to threatening situations on the Web, such as trolling.
Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychologically disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the need to develop automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.
In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.
In light of this discussion, we believe that there is a need to identify not only the trolling attempts, but also comments that could have a negative psychological impact on their recipients. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling. Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, its surrounding context, and its immediate responses, and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.
To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling.
Related Work
In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.
In the realm of psychology, bishop2013effect and bishop2014representations elaborate a deep description of a troll's personality, motivations, effects on the community that trolls interfere in, and the criminal and psychological aspects of trolls. Their main focus is flaming (trolls), and hostile and aggressive interactions between users BIBREF1 .
On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a networks related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.
There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3 . There are two clear differences between their work and ours. One is that trolling is concerned about not only abusive language but also a much larger range of language styles and addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argued that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify between bullying and no bullying in posts. van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories.
Trolling Categorization
In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.
Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.
To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.
The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.
For a given (suspected trolling attempt, responses) pair, not all of the 189 (= 3 × 3 × 3 × 7) combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention disclosure (D) and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B).
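As an illustration of how these constraints prune the label space, the sketch below enumerates all combinations and filters out the implausible ones. The value inventories are placeholders (the authoritative lists are given in Table TABREF1; in particular, the seven response-strategy values shown here are assumed for illustration), but the filtering rules mirror constraints a)–c) above.

```python
from itertools import product

# Placeholder label inventories; the authoritative value lists are in Table TABREF1.
INTENTION = ["trolling", "playing", "normal"]           # I
DISCLOSURE = ["hidden", "exposed", "none"]              # D
INTERPRETATION = ["trolling", "playing", "normal"]      # R
RESPONSE = ["engage", "ignore", "frustrate",            # B: seven values assumed
            "neutralize", "troll", "praise", "normal"]

def is_plausible(i, d, r, b):
    # a) Trolling or Playing intentions must have Hidden or Exposed disclosure.
    if i in ("trolling", "playing") and d not in ("hidden", "exposed"):
        return False
    # b) Normal intentions can only have None disclosure.
    if i == "normal" and d != "none":
        return False
    # c) Trolling or Playing interpretation cannot have a Normal response strategy.
    if r in ("trolling", "playing") and b == "normal":
        return False
    return True

combos = list(product(INTENTION, DISCLOSURE, INTERPRETATION, RESPONSE))
plausible = [c for c in combos if is_plausible(*c)]
print(len(combos), len(plausible))  # 189 combinations before filtering
```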
Conversation Excerpts
To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.
Example 1.
Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.
I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad
Trollname trollpost brotroll
In this example, C1 is teasing C0, expecting to provoke or irritate him, and he is clearly disclosing his trolling intentions. In C0's response, we see that he clearly believes that C1 is trolling, since he directly calls him a “brotroll”, and his response strategy is to frustrate the trolling attempt by denouncing C1's trolling intentions (“trollpost”) and true identity (“brotroll”).
Example 2.
Please post a video of your dog doing this. The way I'm imagining this is adorable.
I hope the dog gets run over by a truck on the way out of the childrens playground.
If you're going to troll, can you at least try to be a bit more convincing?
Haha I hope the cancer kills you.
In this example, we observe that C0's first comment is a polite request (“Please”). In return, C1's answer is a mean-spirited comment whose intention is to disrupt and possibly hurt C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention, and her response strategy is a criticism, which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear: he believes that C1 is trolling, and the negative effect is so tangible that his response strategy is to troll back, or counter-troll, by replying with a comparably mean comment.
Corpus and Annotation
Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in a story's comment section, which takes the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations on Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1, in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates for real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even when there is none, due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling allows us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.
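A minimal sketch of the candidate-retrieval step, assuming the comments are available as a list of dictionaries with a `body` field. The authors used a Lucene fuzzy query; the pure-Python edit-distance filter below is an illustrative substitute, not the original implementation.

```python
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                          # deletion
                         cur[j - 1] + 1,                       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1])) # substitution
        prev = cur
    return prev[n]

def mentions_troll(text, max_dist=1):
    tokens = (t.strip(".,!?\"'").lower() for t in text.split())
    return any(edit_distance(t, "troll") <= max_dist for t in tokens)

all_comments = [  # toy stand-in for the August 2015 comment dump
    {"id": "c1", "body": "If you're going to troll, at least try harder."},
    {"id": "c2", "body": "Please post a video of your dog doing this."},
]
candidates = [c for c in all_comments if mentions_troll(c["body"])]
print([c["id"] for c in candidates])  # ['c1']
```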
For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.
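A sketch of how such snippets can be assembled from the reconstructed tree, assuming each comment records its parent id; the field names and toy comments are hypothetical.

```python
from collections import defaultdict

# Toy comments; each entry records its parent id (None marks the original post).
comments = {
    "a": {"parent": None, "body": "Please post a video of your dog doing this."},
    "b": {"parent": "a", "body": "I hope the dog gets run over by a truck."},
    "c": {"parent": "b", "body": "If you're going to troll, try to be more convincing."},
}

children = defaultdict(list)
for cid, comment in comments.items():
    children[comment["parent"]].append(cid)

def snippets(comments, children):
    """Yield (parent, suspected trolling attempt, direct responses) triples."""
    for cid, comment in comments.items():
        responses = [comments[r] for r in children[cid]]
        # A comment is a suspected trolling attempt if a direct child mentions "troll".
        if any("troll" in r["body"].lower() for r in responses):
            yield comments.get(comment["parent"]), comment, responses

for parent, attempt, responses in snippets(comments, children):
    print(attempt["body"], "->", [r["body"] for r in responses])
```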
We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.
Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them.
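Cohen's kappa on the doubly-annotated snippets can be computed, for example, with scikit-learn; the label values below are toy stand-ins for one of the four aspects.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from the two annotators for one aspect (e.g., Intention).
annotator_1 = ["trolling", "normal", "playing", "trolling", "normal"]
annotator_2 = ["trolling", "normal", "trolling", "trolling", "normal"]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_1, annotator_2):.3f}")
```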
Trolling Attempt Prediction
In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features.
Feature Sets
For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.
N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .
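A hedged sketch of the binary unigram/bigram encoding using scikit-learn; the lemmatized and POS-tagged variants produced with Stanford CoreNLP in the paper are omitted here but would be added as additional columns in the same way.

```python
from sklearn.feature_extraction.text import CountVectorizer

train_comments = [
    "Trollname trollpost brotroll",
    "Please post a video of your dog doing this.",
]

# Binary presence of unigrams and bigrams over the (unlemmatized) surface forms.
vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True, lowercase=True)
X = vectorizer.fit_transform(train_comments)
print(X.shape)
print(vectorizer.get_feature_names_out()[:5])
```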
Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.
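The four polarity features can be obtained with the VADER implementation shipped with NLTK (which requires downloading the vader_lexicon resource); the example comment is taken from the excerpt above.

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Requires: import nltk; nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("I hope the dog gets run over by a truck.")
# Four real-valued features per comment: positive, neutral, negative, compound.
polarity_features = [scores["pos"], scores["neu"], scores["neg"], scores["compound"]]
print(polarity_features)
```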
Emoticons. Reddit's comments make extensive use of emoticons. We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed by hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.
Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.
Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.
Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.
Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters contained cursing words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in their usernames.
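A combined sketch of the wordlist-based binary features described above (emoticons, harmful vocabulary, swearing vocabulary, and swearing in the username). The small wordlists are hypothetical stand-ins for the dictionaries cited in the text.

```python
# Hypothetical wordlists standing in for the dictionaries described above.
EMOTICONS = {":)", ":(", ";)", ":P"}
HARMFUL = {"idiot", "moron"}
SWEAR = {"damn", "crap"}

def wordlist_features(comment, username):
    tokens = {t.strip(".,!?\"'") for t in comment.lower().split()}
    return {
        "has_emoticon": int(any(e in comment for e in EMOTICONS)),
        "has_harmful_word": int(bool(tokens & HARMFUL)),
        "has_swear_word": int(bool(tokens & SWEAR)),
        "swear_in_username": int(any(w in username.lower() for w in SWEAR)),
    }

print(wordlist_features("Damn, that was a stupid post :(", "crappy_user42"))
```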
Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.
Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling in confrontation and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.
GloVe Embeddings. All the aforementioned features constitute a high dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and weight in correlations of semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vector of each lowercase token in the comment found in the Twitter corpus pre-trained GloVe vectors BIBREF10 . The resulting comment vector representation is a 200 dimensional array that is concatenated with the existing BOW.
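A sketch of the comment-vector construction, assuming the pre-trained Twitter GloVe vectors have already been loaded into a token-to-array dictionary (the tiny `glove` dict below is a toy stand-in).

```python
import numpy as np

def comment_vector(comment, glove, dim=200):
    """Average the GloVe vectors of the lowercased tokens present in the table."""
    vectors = [glove[t] for t in comment.lower().split() if t in glove]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

# `glove` would normally be loaded from glove.twitter.27B.200d.txt; toy stand-in here.
glove = {"troll": np.random.rand(200), "post": np.random.rand(200)}
print(comment_vector("Trollname trollpost brotroll post", glove).shape)  # (200,)
```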
Results
Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one for each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19, the leftmost results column reports the F1 score based on majority-class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time; the goal of the latter set of experiments is to gain insight into the predictive effectiveness of each feature group. The right-hand section (All Features) shows the system performance measured using recall, precision, and F1 when all features described in section SECREF13 are used.
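A sketch of this protocol for one of the four classifiers, with toy data standing in for the feature matrix and labels; the choice of macro-F1 for tuning and the grid of C values are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold

# Toy stand-ins; X and y would come from the feature extraction described above.
X = np.random.rand(100, 20)
y = np.random.randint(0, 3, size=100)

folds = [test_idx for _, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X)]
test_scores = []
for i in range(5):
    test_idx, dev_idx = folds[i], folds[(i + 1) % 5]
    train_idx = np.hstack([folds[j] for j in range(5) if j != i and j != (i + 1) % 5])
    best_c, best_dev = 1.0, -1.0
    for c in (0.01, 0.1, 1.0, 10.0):  # tune only the regularization parameter on dev
        clf = LogisticRegression(C=c, max_iter=1000).fit(X[train_idx], y[train_idx])
        dev_f1 = f1_score(y[dev_idx], clf.predict(X[dev_idx]), average="macro")
        if dev_f1 > best_dev:
            best_dev, best_c = dev_f1, c
    clf = LogisticRegression(C=best_c, max_iter=1000).fit(X[train_idx], y[train_idx])
    test_scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
print(np.mean(test_scores))
```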
The majority-class prediction experiment is the simplest baseline against which we can compare the rest of the experiments. In order to illustrate the predictive power of each feature group independent of all others, we perform the “Single Feature Group” experiments. As we can observe in Table TABREF19, there are groups of features that on their own are not better than the majority baseline; for example, the emoticons, politeness cues and polarity features are not better disclosure predictors than the majority baseline. Also, we observe that n-grams and GloVe features are the only groups that contribute to more than one class type across the different tasks. The “All Features” experiment shows how the interaction between feature sets performs better than any of the feature groups in isolation. The accuracy metric for each trolling task is meant to provide an overall measure of performance over all the classes within a particular task, and to allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.
The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section.
Error Analysis
In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.
Errors on Intention (I) prediction: The lack of background knowledge is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge of this and simply predicted it as non-trolling. This kind of error reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from ontologies along with sentiment or polarity information. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.
Non-cursing aggressions and insults: This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task in determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.
Another source of error is the presence of controversial topic words such as “black”, “feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too eager to classify a comment as trolling in the presence of these words, but in many cases such comments are not trolling attempts. In order to ameliorate this problem, one could create ad-hoc word embeddings by training GloVe or another type of distributed representation on a large corpus from the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words, which might help to reduce these errors.
Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model, even when augmented with the distributional features given by the GloVe vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is being wished upon a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.
Errors on Interpretation (R) prediction: It is common practice for many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of earlier response interpretations as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.
Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the “Frustrate” responses towards the suspected troll's comment, while “Neutralize” comments acknowledge that the suspected troll has trolling intentions but give no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” illustrate this subtle difference. The first is a case of “neutralize”, while the second is indeed criticizing the suspected troll's comment and is therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.
Another challenging problem is the distinction between the classes “Troll” and “Engage”. This occurs when the direct responder is so inflamed by the suspected comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words; but, as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, in which the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes.
Conclusion and Future Work
We presented a new view on the computational modeling of trolling in Internet fora, where we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we created an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling. | No
65e32f73357bb26a29a58596e1ac314f7e9c6c91 | 65e32f73357bb26a29a58596e1ac314f7e9c6c91_0 | Q: What is an example of a difficult-to-classify case?
Text: Introduction
In contrast to traditional content distribution channels like television, radio and newspapers, the Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although most of the time their Internet use is harmless, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks make them vulnerable to threatening situations on the Web, such as trolling.
Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychologically disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the need to develop automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.
In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.
In light of this discussion, we believe that there is a need to identify not only the trolling attempts, but also comments that could have a negative psychological impact on their recipients. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling. Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, its surrounding context, and its immediate responses, and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.
To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling.
Related Work
In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.
In the realm of psychology, bishop2013effect and bishop2014representations elaborate a deep description of a troll's personality, motivations, effects on the community that trolls interfere in, and the criminal and psychological aspects of trolls. Their main focus is flaming (trolls), and hostile and aggressive interactions between users BIBREF1 .
On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a networks related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.
There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3 . There are two clear differences between their work and ours. One is that trolling is concerned about not only abusive language but also a much larger range of language styles and addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argued that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify between bullying and no bullying in posts. van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories.
Trolling Categorization
In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.
Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.
To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.
The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.
For a given (suspected trolling attempt, responses) pair, not all of the 189 (= 3 × 3 × 3 × 7) combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention disclosure (D) and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B).
Conversation Excerpts
To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.
Example 1.
Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.
I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad
Trollname trollpost brotroll
In this example, C1 is teasing C0, expecting to provoke or irritate him, and he is clearly disclosing his trolling intentions. In C0's response, we see that he clearly believes that C1 is trolling, since he directly calls him a “brotroll”, and his response strategy is to frustrate the trolling attempt by denouncing C1's trolling intentions (“trollpost”) and true identity (“brotroll”).
Example 2.
Please post a video of your dog doing this. The way I'm imagining this is adorable.
I hope the dog gets run over by a truck on the way out of the childrens playground.
If you're going to troll, can you at least try to be a bit more convincing?
Haha I hope the cancer kills you.
In this example, we observe that C0's first comment is a polite request (“Please”). In return, C1's answer is a mean-spirited comment whose intention is to disrupt and possibly hurt C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention, and her response strategy is a criticism, which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear: he believes that C1 is trolling, and the negative effect is so tangible that his response strategy is to troll back, or counter-troll, by replying with a comparably mean comment.
Corpus and Annotation
Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in a story's comment section, which takes the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations on Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1, in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates for real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even when there is none, due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling allows us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.
For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.
We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.
Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them.
Trolling Attempt Prediction
In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features.
Feature Sets
For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.
N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .
Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.
Emoticons. Reddit's comments make extensive use of emoticons. We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed by hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.
Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.
Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.
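A sketch of the WordNet part of this feature using NLTK (the Merriam-Webster synonyms are omitted); it requires the wordnet resource to be downloaded.

```python
from nltk.corpus import wordnet as wn
# Requires: import nltk; nltk.download("wordnet")

EMOTIONS = ["anger", "embarrassment", "empathy", "fear", "pride", "relief", "sadness"]

emotion_lemmas = set()
for word in EMOTIONS:
    for synset in wn.synsets(word):
        emotion_lemmas.update(l.lower().replace("_", " ") for l in synset.lemma_names())

def has_emotion_word(comment):
    tokens = {t.strip(".,!?\"'").lower() for t in comment.split()}
    return int(bool(tokens & emotion_lemmas))

print(has_emotion_word("I fear this thread is going downhill."))  # 1
```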
Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.
Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters contained cursing words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in their usernames.
Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.
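SEMAFOR is a standalone parser, so the sketch below assumes its output has already been converted into per-sentence frame annotations in a hypothetical dictionary format, and only shows how the three kinds of binary features can be derived from them.

```python
# Hypothetical, pre-computed frame annotations (SEMAFOR itself is a separate Java tool).
parsed_sentences = [
    {"frames": [{"name": "Killing", "target": "kills",
                 "args": [("Victim", "you")]}]},
]

def frame_features(parsed_sentences):
    feats = set()
    for sentence in parsed_sentences:
        for frame in sentence["frames"]:
            feats.add(f"frame={frame['name']}")                        # frame name alone
            feats.add(f"frame_target={frame['name']}|{frame['target']}")
            for role, token in frame["args"]:                          # argument + token
                feats.add(f"frame_arg={role}|{token}")
    return feats  # each element becomes one binary feature

print(sorted(frame_features(parsed_sentences)))
```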
Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling in confrontation and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.
GloVe Embeddings. All the aforementioned features constitute a high dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and weight in correlations of semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vector of each lowercase token in the comment found in the Twitter corpus pre-trained GloVe vectors BIBREF10 . The resulting comment vector representation is a 200 dimensional array that is concatenated with the existing BOW.
Results
Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one for each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19, the leftmost results column reports the F1 score based on majority-class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time; the goal of the latter set of experiments is to gain insight into the predictive effectiveness of each feature group. The right-hand section (All Features) shows the system performance measured using recall, precision, and F1 when all features described in section SECREF13 are used.
The majority-class prediction experiment is the simplest baseline against which we can compare the rest of the experiments. In order to illustrate the predictive power of each feature group independent of all others, we perform the “Single Feature Group” experiments. As we can observe in Table TABREF19, there are groups of features that on their own are not better than the majority baseline; for example, the emoticons, politeness cues and polarity features are not better disclosure predictors than the majority baseline. Also, we observe that n-grams and GloVe features are the only groups that contribute to more than one class type across the different tasks. The “All Features” experiment shows how the interaction between feature sets performs better than any of the feature groups in isolation. The accuracy metric for each trolling task is meant to provide an overall measure of performance over all the classes within a particular task, and to allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.
The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section.
Error Analysis
In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.
Errors on Intention (I) prediction: The lack of background knowledge is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge of this and simply predicted it as non-trolling. This kind of error reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from ontologies along with sentiment or polarity information. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.
Non-cursing aggressions and insults: This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task in determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.
Another source of error is the presence of controversial topic words such as “black”, “feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too eager to classify a comment as trolling in the presence of these words, but in many cases such comments are not trolling attempts. In order to ameliorate this problem, one could create ad-hoc word embeddings by training GloVe or another type of distributed representation on a large corpus from the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words, which might help to reduce these errors.
Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model, even when augmented with the distributional features given by the GloVe vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is being wished upon a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.
Errors on Interpretation (R) prediction: It is common practice for many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of earlier response interpretations as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.
Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the “Frustrate” responses towards the suspected troll's comment, while “Neutralize” comments acknowledge that the suspected troll has trolling intentions but give no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” illustrate this subtle difference. The first is a case of “neutralize”, while the second is indeed criticizing the suspected troll's comment and is therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.
Another challenging problem is the distinction between the classes “Troll” and “Engage”. This occurs when the direct responder is so inflamed by the suspected comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words; but, as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, in which the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes.
Conclusion and Future Work
We presented a new view on the computational modeling of trolling in Internet fora, where we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we created an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling. | The lack of background, Non-cursing aggressions and insults, the presence of controversial topic words , shallow meaning representation, directly ask the suspected troll if he/she is trolling or not, a blurry line between “Frustrate” and “Neutralize”, distinction between the classes “Troll” and “Engage” |
46f175e1322d648ab2c0258a9609fe6f43d3b44e | 46f175e1322d648ab2c0258a9609fe6f43d3b44e_0 | Q: What potential solutions are suggested?
| inclusion of longer parts of the conversation |
7cc22fd8c9d0e1ce5e86d0cbe90bf3a177f22a68 | 7cc22fd8c9d0e1ce5e86d0cbe90bf3a177f22a68_0 | Q: What is the size of the dataset?
Text: Introduction
In contrast to traditional content distribution channels like television, radio and newspapers, Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although most of the time, their Internet use is harmless, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks makes them vulnerable to threatening situations on the Web, such as trolling.
Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychological disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the need to develop automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.
In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.
In light of this discussion, we believe that there is a need to identify not only the trolling attempts, but also comments that could have a negative psychological impact on its receipts. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling. Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, it's surrounding context, and its immediate responses and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.
To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling.
Related Work
In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.
In the realm of psychology, bishop2013effect and bishop2014representations elaborate a deep description of a troll's personality, motivations, effects on the community that trolls interfere in and the criminal and psychological aspects of trolls. Their main focus are flaming (trolls), and hostile and aggressive interactions between users BIBREF1 .
On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a networks related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.
There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3 . There are two clear differences between their work and ours. One is that trolling is concerned about not only abusive language but also a much larger range of language styles and addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argued that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify between bullying and no bullying in posts. van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories.
Trolling Categorization
In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.
Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.
To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.
The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.
For a given (suspected trolling attempt, responses) pair, not all of the 189 (= INLINEFORM0 ) combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention disclosure (D) and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B).
Conversation Excerpts
To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.
Example 1.
[noitemsep,nolistsep]
Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.
[noitemsep,nolistsep]
I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad
[noitemsep,nolistsep]
Trollname trollpost brotroll
In this example, C1 is teasing of C0, expecting to provoke or irritate irritate, and he is clearly disclosing her trolling intentions. In C0's response, we see that he clearly believe that C1 is trolling, since is directly calling him a “brotroll” and his response strategy is frustrate the trolling attempt by denouncing C1 troll's intentions “trollpost” and true identity “brotroll”.
Example 2.
[noitemsep,nolistsep]
Please post a video of your dog doing this. The way I'm imagining this is adorable.
[noitemsep,nolistsep]
I hope the dog gets run over by a truck on the way out of the childrens playground.
[noitemsep,nolistsep]
If you're going to troll, can you at least try to be a bit more
Haha I hope the cancer kills you. convincing?
In this example, we observe that C0's first comment is making a polite request (Please). In return, C1 answer is a mean spirited comment whose intention is to disrupt and possible hurtful C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention and her response strategy is a criticism which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear, he believes that C1 is trolling and the negative effect is so tangible, that his response strategy is to troll back or counter-troll by replying with a comparable mean comment.
Corpus and Annotation
Reddit is popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting stories and comments, and comments in the story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversation in Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates of real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.
For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.
We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.
Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them.
Trolling Attempt Prediction
In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features.
Feature Sets
For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.
N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .
Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.
Emoticons. Reddit's comments make extensive use of emoticons. We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.
Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.
Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.
Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories . The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.
Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters contained cursing words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in their usernames.
Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.
Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling in confrontation and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.
GloVe Embeddings. All the aforementioned features constitute a high dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and weight in correlations of semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vector of each lowercase token in the comment found in the Twitter corpus pre-trained GloVe vectors BIBREF10 . The resulting comment vector representation is a 200 dimensional array that is concatenated with the existing BOW.
Results
Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one per each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19 the leftmost results column reports F1 score based on majority class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time. The goal of the later set of experiments is to gain insights about feature predictive effectiveness. The right side section (All features) shows the system performance measured using recall, precision, and F-1 as shown when all features described in section SECREF13 are used.
The majority class prediction experiment is simplest baseline to which we can can compare the rest of the experiments. In order to illustrate the prediction power of each feature group independent from all others, we perform the “Single Feature Group”, experiments. As we can observe in Table TABREF19 there are groups of features that independently are not better than the majority baseline, for example, the emoticons, politeness cues and polarity are not better disclosure predictors than the majority base. Also, we observe that only n-grams and GloVe features are the only group of features that contribute to more than a class type for the different tasks. Now, the “All Features” experiment shows how the interaction between feature sets perform than any of the other features groups in isolation. The accuracy metric for each trolling task is meant to provide an overall performance for all the classes within a particular task, and allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.
The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section.
Error Analysis
In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.
Errors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. These kind of errors reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from anthologies along with a sentiment or polarity. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.
Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task of determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.
Another source of error is the presence of controversial topic words such as “black”,“feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too confident to classify a comment as trolling in the presence of these words, but in many cases they do not. In order to ameliorate this problem, one could create ad-hoc word embeddings by training glove or other type of distributed representation on a large corpus for the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words so they might help to reduce these errors.
Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model even when augmented with the distributional features given by the glove vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader need to infer the meaning of “bullet to the head” and that this action is desirable for a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.
Errors on Interpretation (R) prediction: it is a common practice from many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task problem: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of older response interpretation as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.
Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that “Frustrate” responses contain some criticism of the suspected troll's comment, while “Neutralize” responses acknowledge that the suspected troll has trolling intentions but give no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” illustrate this subtle difference. The first is a case of “neutralize”, while the second is indeed criticizing the suspected troll's comment and therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.
Another challenging problem is the distinction between the classes “Troll” and “Engage”. This happens when the direct responder is so agitated by the suspected trolling comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, with comments evolving into increasingly agitated remarks. One may then use this information to disambiguate between the two classes.
Conclusion and Future Work
We presented a new view on the computational modeling of trolling in Internet fora, where we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we create an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling. | 1000 conversations composed of 6833 sentences and 88047 tokens |
3fa638e6167e1c7a931c8ee5c0e2e397ec1b6cda | 3fa638e6167e1c7a931c8ee5c0e2e397ec1b6cda_0 | Q: What Reddit communities do they look at?
Text: Introduction
In contrast to traditional content distribution channels like television, radio and newspapers, the Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although their Internet use is harmless most of the time, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks make them vulnerable to threatening situations on the Web, such as trolling.
Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychologically disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the need to develop automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.
In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.
In light of this discussion, we believe that there is a need to identify not only trolling attempts, but also comments that could have a negative psychological impact on their recipients. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling. Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, its surrounding context, and its immediate responses, and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.
To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling.
Related Work
In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.
In the realm of psychology, bishop2013effect and bishop2014representations elaborate a deep description of a troll's personality, motivations, effects on the community that trolls interfere in, and the criminal and psychological aspects of trolls. Their main focus is flaming (trolls), and hostile and aggressive interactions between users BIBREF1 .
On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a networks related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.
There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3 . There are two clear differences between their work and ours. One is that trolling is concerned about not only abusive language but also a much larger range of language styles and addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argued that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify between bullying and no bullying in posts. van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories.
Trolling Categorization
In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.
Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.
To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.
The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.
For a given (suspected trolling attempt, responses) pair, not all of the 189 combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention Disclosure (D), and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B).
Conversation Excerpts
To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.
Example 1.
Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.
I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad
Trollname trollpost brotroll
In this example, C1 is teasing C0, expecting to provoke or irritate him, and he is clearly disclosing his trolling intentions. In C0's response, we see that he clearly believes that C1 is trolling, since he directly calls him a “brotroll”, and his response strategy is to frustrate the trolling attempt by denouncing C1's trolling intentions (“trollpost”) and true identity (“brotroll”).
Example 2.
Please post a video of your dog doing this. The way I'm imagining this is adorable.
I hope the dog gets run over by a truck on the way out of the childrens playground.
If you're going to troll, can you at least try to be a bit more convincing?
Haha I hope the cancer kills you.
In this example, we observe that C0's first comment is making a polite request (“Please”). In return, C1's answer is a mean-spirited comment whose intention is to disrupt and possibly hurt C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention, and her response strategy is a criticism, which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear: he believes that C1 is trolling, and the negative effect is so tangible that his response strategy is to troll back or counter-troll by replying with a comparably mean comment.
Corpus and Annotation
Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in a story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations in Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates of real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even when there is none, due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling allows us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.
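To make the retrieval step concrete, the following is a minimal sketch that approximates it without Lucene: a comment is kept as a candidate if any of its tokens is within one edit of the word “troll”. The example comments and the tokenization are purely illustrative assumptions; the actual pipeline queried a Lucene inverted index.

```python
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def mentions_troll(comment, max_dist=1):
    # Keep comments containing a token within one edit of "troll" (e.g. "trol", "trolls").
    tokens = (tok.strip(".,!?\"'").lower() for tok in comment.split())
    return any(edit_distance(tok, "troll") <= max_dist for tok in tokens)

comments = ["Are you a troll?", "Please post a video of your dog doing this."]
print([c for c in comments if mentions_troll(c)])  # only the first comment matches
```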
For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.
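The tree reconstruction and snippet extraction can be sketched roughly as follows; the tuple layout of the comment records and the example texts are assumptions made only for illustration.

```python
from collections import defaultdict

# Assumed minimal comment records: (comment_id, parent_id, text).
comments = [
    ("c0", None, "Please post a video of your dog doing this."),
    ("c1", "c0", "I hope the dog gets run over by a truck."),
    ("c2", "c1", "If you're going to troll, can you at least be more convincing?"),
    ("c3", "c1", "Haha I hope the cancer kills you."),
]

by_id = {cid: (pid, text) for cid, pid, text in comments}
children = defaultdict(list)
for cid, pid, _ in comments:
    children[pid].append(cid)

snippets = []
for cid, (pid, text) in by_id.items():
    responses = [by_id[r][1] for r in children[cid]]
    # A comment is a suspected trolling attempt if at least one direct
    # response contains the word "troll".
    if any("troll" in r.lower() for r in responses):
        parent_text = by_id[pid][1] if pid in by_id else None
        snippets.append({"parent": parent_text, "suspect": text, "responses": responses})

print(snippets[0]["suspect"])  # the comment by c1
```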
We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.
Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them.
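For reference, the agreement statistic can be computed directly with scikit-learn; the label values below are hypothetical stand-ins for one of the four aspects.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical doubly-annotated Intention labels for a handful of snippets.
annotator_1 = ["trolling", "normal", "playing", "trolling", "normal", "trolling"]
annotator_2 = ["trolling", "normal", "trolling", "trolling", "normal", "trolling"]

print(cohen_kappa_score(annotator_1, annotator_2))
```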
Trolling Attempt Prediction
In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features.
Feature Sets
For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.
N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include unigrams and bigrams along with their POS tags, as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .
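A rough equivalent of these binary n-gram indicators, sketched with scikit-learn (the lemmatized and POS-augmented variants are omitted, and the example comments are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

comments = ["you are just a lame troll", "settle down drama queen"]

# Binary unigram + bigram indicator features over the (unlemmatized) tokens.
vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True)
X = vectorizer.fit_transform(comments)
print(X.shape)
print(vectorizer.get_feature_names_out()[:5])
```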
Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one for each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real-number value.
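The four polarity features can be obtained as follows, using NLTK's bundled VADER implementation; the example comment is illustrative.

```python
# Requires: nltk.download("vader_lexicon")
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("Troll? How cute.")
features = [scores["pos"], scores["neu"], scores["neg"], scores["compound"]]
print(features)
```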
Emoticons. Reddit's comments make extensive use of emoticons. We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed by hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.
Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.
Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.
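A sketch of collecting the emotion-related lemmas with NLTK's WordNet interface (the dictionary synonyms are omitted, and the example comment is made up):

```python
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

EMOTIONS = ["anger", "embarrassment", "empathy", "fear", "pride", "relief", "sadness"]

emotion_lemmas = set()
for emotion in EMOTIONS:
    for synset in wn.synsets(emotion):
        emotion_lemmas.update(lemma.lower().replace("_", " ") for lemma in synset.lemma_names())

def has_emotion_word(comment):
    # Binary feature: 1 if any single-word emotion lemma appears in the comment.
    tokens = set(comment.lower().split())
    return int(any(lemma in tokens for lemma in emotion_lemmas if " " not in lemma))

print(has_emotion_word("so much anger in this thread"))  # 1
```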
Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories. The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.
Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters often have cursing words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in their usernames.
Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.
Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling in confrontation and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.
GloVe Embeddings. All the aforementioned features constitute a high dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and weight in correlations of semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vector of each lowercase token in the comment found in the Twitter corpus pre-trained GloVe vectors BIBREF10 . The resulting comment vector representation is a 200 dimensional array that is concatenated with the existing BOW.
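A sketch of the comment-level embedding; the file path refers to the publicly released 200-dimensional GloVe Twitter vectors, and the `bow_features` name in the usage comment is a placeholder for the features above.

```python
import numpy as np

def load_glove(path, dim=200):
    # Each line of the GloVe text file is: token followed by `dim` float components.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def comment_embedding(comment, vectors, dim=200):
    # Average the vectors of lowercased tokens that appear in the GloVe vocabulary;
    # the 200-dimensional result is concatenated with the BOW features.
    hits = [vectors[t] for t in comment.lower().split() if t in vectors]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)

# vectors = load_glove("glove.twitter.27B.200d.txt")
# x = np.concatenate([bow_features, comment_embedding(comment, vectors)])
```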
Results
Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one for each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19 the leftmost results column reports the F1 score based on majority-class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time; the goal of these experiments is to gain insight into the predictive power of each feature group. The rightmost section (All Features) shows the system performance measured using recall, precision, and F1 when all features described in Section SECREF13 are used.
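A condensed sketch of this evaluation loop is shown below. It assumes `X` is a feature matrix (e.g., the BOW features with the GloVe vectors appended) and `y` a NumPy array of gold labels for one aspect; the grid of regularization values is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

def cross_validate(X, y, C_grid=(0.01, 0.1, 1.0, 10.0)):
    # For each of the 5 folds: one fold is the test set, another the development
    # set (used to tune the regularization parameter C), and the remaining three
    # folds form the training set.
    folds = [test for _, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X)]
    scores = []
    for i in range(5):
        test_idx, dev_idx = folds[i], folds[(i + 1) % 5]
        train_idx = np.setdiff1d(np.arange(len(y)), np.concatenate([test_idx, dev_idx]))
        best_C = max(C_grid, key=lambda C: f1_score(
            y[dev_idx],
            LogisticRegression(C=C, max_iter=1000).fit(X[train_idx], y[train_idx]).predict(X[dev_idx]),
            average="macro"))
        clf = LogisticRegression(C=best_C, max_iter=1000).fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
    return float(np.mean(scores))
```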
The majority-class prediction experiment is the simplest baseline against which we can compare the rest of the experiments. In order to illustrate the prediction power of each feature group independently of all others, we perform the “Single Feature Group” experiments. As we can observe in Table TABREF19, some feature groups on their own are no better than the majority baseline; for example, emoticons, politeness cues and polarity are not better disclosure predictors than the majority baseline. We also observe that n-grams and GloVe are the only feature groups that contribute to more than one class type across the different tasks. The “All Features” experiment shows whether the interaction between feature sets performs better than any of the feature groups in isolation. The accuracy metric for each trolling task is meant to provide an overall performance measure over all the classes within a particular task and to allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.
The overall Total Accuracy score reported in Table TABREF19 using the entire feature set is 0.549. This result is what makes this dataset interesting: there is still plenty of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section.
Error Analysis
In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.
Errors on Intention (I) prediction: The lack of background knowledge is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. This kind of error reduces recall on the prediction of trolling comments. A solution would be to include additional knowledge from ontologies along with sentiment or polarity information. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.
Non-cursing aggressions and insults: This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult time determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.
Another source of error is the presence of controversial topic words such as “black”, “feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier is too eager to classify a comment as trolling in the presence of these words, but in many cases such comments are not trolling. To ameliorate this problem, one could create ad-hoc word embeddings by training GloVe or another type of distributed representation on a large corpus from the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words, which might help reduce these errors.
Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model, even when augmented with the distributional features given by the GloVe vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is directed at a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.
Errors on Interpretation (R) prediction: It is common practice for many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of the interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and use earlier response interpretations as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.
Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that “Frustrate” responses contain some criticism of the suspected troll's comment, while “Neutralize” responses acknowledge that the suspected troll has trolling intentions but give no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” illustrate this subtle difference. The first is a case of “neutralize”, while the second is indeed criticizing the suspected troll's comment and therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.
Another challenging problem is the distinction between the classes “Troll” and “Engage”. This happens when the direct responder is so agitated by the suspected trolling comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, with comments evolving into increasingly agitated remarks. One may then use this information to disambiguate between the two classes.
Conclusion and Future Work
We presented a new view on the computational modeling of trolling in Internet fora, where we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we create an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling. | Unanswerable |
d2b3f2178a177183b1aeb88784e48ff7e3e5070c | d2b3f2178a177183b1aeb88784e48ff7e3e5070c_0 | Q: How strong is negative correlation between compound divergence and accuracy in performed experiment?
Text: Introduction
Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:
We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).
We present the Compositional Freebase Questions (CFQ) , a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).
We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).
We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).
Distribution-Based Compositionality Assessment (DBCA)
Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.
As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: "Who directed [entity]?", "Did Christopher Nolan [predicate] Inception?", etc.
To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and "Who produced Inception?" because the atoms are identically represented in the train and test sets while the compounds differ.
To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)
Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality
We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:
Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.
Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.
The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).
To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \in \mathbb {G}$ in a sample as $w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$, where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.
We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\alpha (P \Vert Q) = \sum _{k} p_k^\alpha \, q_k^{1-\alpha } \in [0, 1]$ BIBREF9. For the atom divergence, we use $\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire of making the atom distributions in train and test as similar as possible. For the compound divergence, we use $\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define as follows the notions of compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$: $\mathcal {D}_C(V \Vert W) = 1 - C_{0.1}(\mathcal {F}_C(V) \Vert \mathcal {F}_C(W))$ and $\mathcal {D}_A(V \Vert W) = 1 - C_{0.5}(\mathcal {F}_A(V) \Vert \mathcal {F}_A(W))$.
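In code, these quantities reduce to a weighted sum over the union of the two supports. The sketch below assumes the atom and compound distributions are given as dictionaries mapping atoms (or weighted compounds) to their frequencies or weights.

```python
import numpy as np

def chernoff_coefficient(p, q, alpha):
    # C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha), where P and Q are
    # normalized from the given frequency/weight dictionaries.
    keys = sorted(set(p) | set(q))
    p_arr = np.array([p.get(k, 0.0) for k in keys], dtype=float)
    q_arr = np.array([q.get(k, 0.0) for k in keys], dtype=float)
    p_arr, q_arr = p_arr / p_arr.sum(), q_arr / q_arr.sum()
    return float(np.sum(p_arr ** alpha * q_arr ** (1.0 - alpha)))

def atom_divergence(train_atoms, test_atoms):
    return 1.0 - chernoff_coefficient(train_atoms, test_atoms, alpha=0.5)

def compound_divergence(train_compounds, test_compounds):
    return 1.0 - chernoff_coefficient(train_compounds, test_compounds, alpha=0.1)
```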
Based on these principles, we suggest to use as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\mathcal {D}_A \le 0.02$). See Section SECREF4 for details about how to construct such splits.
The CFQ Dataset
We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.
The CFQ Dataset ::: Automatic, rule-based generation
BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.
Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).
In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):
Grammar rules that generate natural language constructs and corresponding logical forms.
Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.
Resolution rules that map constructs of the logical form to sparql constructs.
Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.
These rules define a language of triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.
The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18.
The CFQ Dataset ::: Dataset details and statistics
Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).
Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.
Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).
Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.
Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.
Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.
Release and statistics.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
Compositionality Experiments for CFQ and scan
The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\mathcal {D}_C(V \Vert W)$ and $\mathcal {D}_A(V \Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.
In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.
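A simplified variant of this greedy procedure is sketched below: it assigns each shuffled example exactly once rather than searching over all remaining candidates at every step, and it does not implement the removal moves. Each example is assumed to be a pair of Counters over its atoms and compounds, and the divergence helpers from the previous sketch are reused.

```python
import random
from collections import Counter

def merged(counters):
    total = Counter()
    for c in counters:
        total.update(c)
    return total

def greedy_split(examples, target_dc, max_da=0.02, train_fraction=0.8, seed=0):
    # examples: list of (atom Counter, compound Counter) pairs.
    rng = random.Random(seed)
    pool = list(examples)
    rng.shuffle(pool)
    train, test = [pool.pop()], [pool.pop()]

    def objective(tr, te):
        dc = compound_divergence(merged(c for _, c in tr), merged(c for _, c in te))
        da = atom_divergence(merged(a for a, _ in tr), merged(a for a, _ in te))
        # Stay close to the target compound divergence; penalize excess atom divergence.
        return abs(dc - target_dc) + 10.0 * max(0.0, da - max_da)

    for ex in pool:
        total = len(train) + len(test) + 1
        if len(train) >= train_fraction * total:
            test.append(ex)                       # maintain the desired train/test ratio
        elif len(test) >= (1.0 - train_fraction) * total:
            train.append(ex)
        elif objective(train + [ex], test) <= objective(train, test + [ex]):
            train.append(ex)
        else:
            test.append(ex)
    return train, test
```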
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\le \hspace{-2.5pt} N$, while the test set consists of examples with output length $> \hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.
The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.
Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.
This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar.
Experimental Results and Analysis ::: Experiment Setup
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention, an LSTM BIBREF19 with an attention mechanism BIBREF20; (2) Transformer BIBREF21; and (3) Universal Transformer BIBREF22.
We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.
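For reference, a minimal way to summarize such replicated runs is shown below. The replica accuracies are hypothetical, and the t-distribution-based interval is one common choice; the exact interval construction is not stated in the text.

```python
import numpy as np
from scipy import stats

def mean_with_ci(values, confidence=0.95):
    # Mean over the replicas and the half-width of a t-based confidence interval.
    v = np.asarray(values, dtype=float)
    half_width = stats.t.ppf(0.5 + confidence / 2.0, df=len(v) - 1) * stats.sem(v)
    return v.mean(), half_width

mean, ci = mean_with_ci([0.372, 0.358, 0.365, 0.380, 0.361])  # hypothetical replica accuracies
print(f"{mean:.3f} +/- {ci:.3f}")
```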
Note that while we construct test and validation sets from the same distribution, we suggest that hyperparameter tuning should be done on a random split (or random subset of the train set) if one wants to measure compositional generalization of a model with respect to an unknown test distribution as opposed to an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence and thus measure the ability for a particular architecture to yield models that can be made to generalize in one particular way (through leaking information about the test set in the hyperparameters).
Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the SPARQL output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\mapsto $ `select count(*) where {M0 a ns:film.writer}'.
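A sketch of this anonymization step follows. The name-to-MID mapping and the MID string are made up purely for illustration; only the placeholder scheme (M0, M1, ...) is taken from the text.

```python
def anonymize(question, query, name_to_mid):
    # Replace each MID appearing in the query with a placeholder M0, M1, ...
    # and substitute the same placeholder for the entity's name in the question.
    placeholders = {}
    for name, mid in name_to_mid.items():
        if mid in query:
            ph = placeholders.setdefault(mid, f"M{len(placeholders)}")
            query = query.replace(mid, ph)
            question = question.replace(name, ph)
    return question, query

# "ns:m.0fake1" is a made-up MID used only for this example.
q, s = anonymize("Was Christopher Nolan a screenwriter",
                 "select count(*) where {ns:m.0fake1 a ns:film.writer}",
                 {"Christopher Nolan": "ns:m.0fake1"})
print(q)  # Was M0 a screenwriter
print(s)  # select count(*) where {M0 a ns:film.writer}
```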
The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18.
Experimental Results and Analysis ::: Results and analysis for CFQ
The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:
All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).
The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.
For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.
This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.
Note that the experiment based on output length exhibits a worse accuracy than we would expect based on its compound divergence. One explanation is that the test distribution differs from the training distribution in ways other than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between the length ratios and the accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that, despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the length ratios.
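The reported $R^2$ values can in principle be reproduced with a simple least-squares fit of accuracy against each predictor; a sketch with placeholder numbers (not the actual measurements):

import numpy as np

def r_squared(x, y):
    # R^2 of a linear fit of accuracy (y) against a single predictor (x),
    # e.g. compound divergence or an input/output length ratio.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder per-split statistics for one baseline system.
compound_divergence = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
mean_accuracy       = [0.98, 0.85, 0.70, 0.56, 0.44, 0.34, 0.26, 0.19]
print(r_squared(compound_divergence, mean_accuracy))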
Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed; more details are provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors for longer sequences and, when they make an error, produce outputs that are about 20% too short. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3's spouse?” the best system ignores the adjective “female”.
Experimental Results and Analysis ::: Results and analysis for scan
To demonstrate the use of our analysis method on another dataset, we re-create the scan dataset BIBREF2, which consists of compositional navigation commands (e.g., `turn left twice and jump') mapped to corresponding action sequences (e.g., `lturn lturn jump'). We use the original grammar while tracking the rule applications used for the construction of each input-output pair. This enables us to compare the compositional generalization abilities of the baseline systems on this dataset in a novel way.
Figure FIGREF28(b) shows the graphs for the scan data set in the same setup as Figure FIGREF28(a) does for CFQ. We observe that the compound divergence again is a good predictor for the mean accuracy for all three architectures. One difference is that for scan the systems are able to attain accuracies close to 100% for compound divergences up to around 0.2, which is not the case for CFQ. This seems to be in line with the fact that overall CFQ is a more complex task than scan: the total number of rules used in generating scan is only 38 in comparison to 443 rules in the construction of CFQ.
Appendix SECREF14 provides a comparison to other experiments presented in previous work, including experiments that have significantly different atom distributions. We observe that this generally causes lower accuracies but does not break the correlation between accuracy and compound divergence.
Related Work
To measure compositional generalization for semantic parsing to SQL, BIBREF8 propose to ensure that no SQL query pattern occurs in both the train and the test set (“query split”), and they provide such splits for several data sets. By evaluating several ML architectures the authors confirm that this query-pattern split is harder to learn than a conventional split.
BIBREF2 introduce the scan dataset, and several publications provide interesting analyses of compositional generalization using it BIBREF5, BIBREF6. BIBREF7 discuss a particular extension of a seq2seq model that is effective in handling difficult scan sub-tasks by separating semantic and syntactic information during learning. Our contributions extend the analyses on the scan data in several ways: CFQ provides richer annotations and covers a broader subset of English than the scan dataset, and we propose a comprehensive score for assessing aggregate compositionality of a system on a given task.
The mathematics dataset BIBREF13 is a large, automatically generated set of 112M samples in 56 separate sub-tasks. The authors present data and experiments that share common goals with our approach, but focus on mathematical reasoning instead of natural language. Our breakdown of generation rules per train sample is more fine-grained, which allows a more precise compositional generalization analysis. Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which however do not focus on compositional generalization.
A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.
BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.
While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.
A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.
With regards to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. The author suggests to use a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived on the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner that “can generalize to more complex problems than the learner has previously encountered”.
Conclusion and Outlook
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would like also to extend our approach to broader subsets of language understanding, including use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains.
Example Dataset Item
The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...').
Data Quality Analysis
During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples of the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided just for ease of human reading. Manual checking also indicated that all questions are associated with the semantically correct sparql queries. However, because we rely on the data present in Freebase, there are three debatable questions which sound somewhat unnatural; these are repeated and discussed further below the list.
Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?
Which male person was a sibling of [Andrew Klavan]?
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?
Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?
What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?
Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?
Was [Cachún cachún ra ra!]'s director a film director's American child?
Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?
Was [Holly Ellenson]'s child [Wally Ellenson]?
Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?
Was a film producer [Lilies of the Ghetto]'s distributor and producer?
Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?
What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?
Who influenced and was influenced by [Black Dynamite]'s star?
What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?
Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?
Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?
What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?
Who was a male costume designer and director of [Ene... due... like... fake...] and [The Windmill Bar]?
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?
Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?
What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?
What male employee of [Weta Workshop] was [Bad Taste]'s editor?
Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?
What art director influenced [DreamWorks Animation]'s founder?
Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?
Was [Eric Massa] a cinematographer's parent's sibling's American sibling?
What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?
What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?
What did an editor and cinematographer of [Tongue Twister Variations] direct?
Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?
Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?
Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?
Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?
Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?
Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?
Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?
What Italian film director directed [Children of Hannibal]?
What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?
Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?
Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?
What did a sibling of [Louise Rohr] executive produce, produce, and edit?
Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?
Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?
Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?
Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?
Further discussion of the debatable questions:
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character.
Data Distribution Analysis ::: Answer frequencies
Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency.
Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels
Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even.
Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations
Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations.
Divergence-Based Split Analysis ::: Qualitative analysis of MCD$_{1}$
Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easily observable “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9 we provide a random sample of size 20 from both the train and test set here. Indeed, even for the MCD$_{1}$ split with a high divergence of 0.694, the 20 random samples of train and test questions shown below cannot easily be distinguished as they both contain the same kind of questions of different sizes.
Train samples from MCD$_{1}$:
What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?
Which male person influenced and was influenced by [William Dean Howells]?
Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?
What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?
Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?
What did a spouse of [Shorts]'s producer's spouse executive produce and direct?
Did [P. G. Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?
What Mexican person did a film producer employ?
Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?
Who did [For the Secret Service]'s director's female spouse influence?
Who married, was influenced by, and influenced a company's founder?
Was [MAN SE]'s French male German employee's employer [Sulzer]?
Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?
Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?
Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?
Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?
Who was influenced by [Adam Sandler] and married a film producer?
Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?
Was a founder of [IG Port] employed by a film producer?
Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?
Test samples from MCD$_{1}$:
What sequel of [Paranormal Activity 2] was edited by and written by a film director?
What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?
Did [Pixar] employ an art director and employ [Susham Bedi]?
Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?
What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?
Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?
Was [Luke Larkin Music] an art director's employer?
What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?
Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?
What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?
Was a person's female sibling [Reggae in a Babylon]'s producer?
Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?
Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?
Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?
Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?
Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?
Was [The Voyeur]'s executive producer an actor's parent?
Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?
Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?
Was a costume designer's employer [Philips High School]?
Divergence-Based Split Analysis ::: Quantitative analysis of MCD$_{1}$
Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom (resp. compound) we use the fraction of examples it appears in. Both atoms and compounds are indexed primarily by their frequency in the train set and secondarily by their frequency in the test set, in decreasing order. For practical reasons we only look at a small subset of compounds here, but we believe the analysis is representative.
We can see that the frequency of atoms in the two sets is very aligned and that all atoms from the test set appear in the train set. The frequency of compounds, however, is wildly different: while some invariably occur in both sets, the frequencies are often not aligned, and most compounds appear only in either the train or the test set.
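For illustration, a minimal sketch of how such frequencies can be computed, assuming each example is annotated with the atoms (or compounds) it contains; the field name is an assumption of this sketch:

from collections import Counter

def frequencies(examples, field="atoms"):
    # Fraction of examples each atom/compound appears in (counted once per example).
    counts = Counter()
    for example in examples:
        counts.update(set(example[field]))
    return {item: n / len(examples) for item, n in counts.items()}

def coverage(train, test, field="atoms"):
    freq_train, freq_test = frequencies(train, field), frequencies(test, field)
    missing_in_train = set(freq_test) - set(freq_train)
    return freq_train, freq_test, missing_in_train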
Hyperparameters
The experiments were run using the tensor2tensor framework BIBREF39 with some of the hyperparameters tuned using a random split of a previous, smaller version of the data set during development. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134.
Detailed error analysis ::: Breakdown of error types
Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).
The accuracy metric requires the model response and the golden (correct) answer to be exactly equal to each other. Thus, a sparql query with the same clauses as the golden answer but in a different order or with some of the clauses appearing multiple times is also considered to be an error despite being equivalent to the golden answer in its meaning. The amount of such errors is relatively small though, accounting for 1.8%, 0.6% and 1.5% of total test set size for LSTM+Attention, Transformer and Universal Transformer respectively.
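For illustration, a minimal sketch of how clause-level differences between the predicted and golden queries can be bucketized into insertions and deletions, and how order- or duplication-only mismatches can be detected; the query shape and the ' . ' clause separator are assumptions about the normalized format:

def clauses(query):
    # Extract the set of clauses between the outermost braces.
    body = query.split("{", 1)[1].rsplit("}", 1)[0]
    return {clause.strip() for clause in body.split(" . ") if clause.strip()}

def bucketize(predicted, golden):
    pred, gold = clauses(predicted), clauses(golden)
    return {
        "insertions": sorted(pred - gold),   # clauses the model added
        "deletions": sorted(gold - pred),    # clauses the model omitted
        "equivalent_but_not_exact": pred == gold and predicted != golden,
    }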
Detailed error analysis ::: Qualitative error analysis
Below we qualitatively analyze a number of instances the models fail on. We anonymize the MIDs in the same way as the data is provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k), and then randomly select queries from this list. In the following we discuss a few cases in more detail. Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:
ns:people.person.child = ns:people.person.children|
ns:fictional_universe.fictional_character.children|
ns:organization.organization.child/
ns:organization.organization_relationship.child
ns:people.person.sibling = ns:people.person.siblings/
ns:people.sibling_relationship.sibling|
ns:fictional_universe.fictional_character.siblings/
ns:fictional_universe.sibling_relationship_of_fictional_characters.siblings
Query 1: “What sibling of M0 was M1' s parent?”
Golden (correct) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.child M1 .
?x0 ns:people.person.sibling M0 .
FILTER ( ?x0 != M0 )
}
Inferred (system) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.sibling ?x1 .
?x0 ns:people.person.sibling M0 .
?x1 ns:people.person.child M1 .
FILTER ( ?x0 != ?x1 )
}
Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set in order to show that it provides enough information for the question to be answered correctly.
Some subqueries of the query and their occurrences are shown in Table TABREF140. While the exact subquery “What sibling” does not occur at training, the two words have been shown separately in many instances: the subqueries “sibling of Mx” and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of the rule tree of this example with those shown at training. As can be read from the table, similar sentences have been shown during training. Some examples are:
What was executive produced by and written by a sibling of M0?
What costume designer did M1's parent employ?
What cinematographer was a film editor that M2 and M3 married?
What film director was a character influenced by M2?
Query 2: “Did a male film director edit and direct M0 and M1?”
Golden (correct) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:film.editor.film M1 .
?x0 ns:people.person.gender m_05zppz
}
Inferred (system) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:people.person.gender m_05zppz
}
Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the relation between the director and movie M1.
Looking at subqueries and their occurrence count (Table TABREF145), we see again that various subqueries occur often during training. However, “edit and direct” have not been shown often together. When looking at the rule trees, we see that both conjunctions in the query occur often at training separately: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur at training. This may be the reason why all systems fail on this example, but at the same time we believe a compositional learner should be able to generalize correctly given the training instances. Some examples are:
Did a male film director that M3's parent married influence an art director?
Did a film producer that played M2 edit and direct M1?
Did a screenwriter edit and direct a sequel of M1?
Did a Chinese male film director edit M1 and M2?
Additional experimental results on scan
Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available. (Data splits obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):
BIBREF2:
simple (random)
by action sequence length
adding a primitive and adding a primitive along with complex combinations
BIBREF6:
adding a template
adding template fillers
adding more training examples of fillers (fewshot)
In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.
We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that the compound divergence is correlated with accuracy for these baseline architectures. One difference to the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific potentially problematic scenario for learning. E.g. in the experiment `primitive<jump>' (with very low accuracies for all three systems) the jump command is shown exactly in one combination (namely alone) in the training data while it occurs in all test examples in arbitrary combinations.
This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).
While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports).
Analysis of relations between accuracy, compound divergence, and training size
Figure FIGREF28 shows for all baseline systems a strong correlation between accuracy and compound divergence for the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation is changed for different training sizes. Figures FIGREF159 and FIGREF159 show that this correlation holds also for smaller training sizes but that the accuracy is generally somewhat lower for smaller training sizes.
At the same time, we observe that the difference between accuracies of various training sizes gets smaller as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach a training size of about 80k for CFQ and about 6k for scan. This indicates that further increasing the train set size may not be sufficient to do well on these compositionality experiments.
Logical Form
To represent our logical form we use syntax of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.
Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:
and the following would be roles:
Note that our logical form does not have roles other than those in a form of RolePair($C_1$, $C_2$).
New strings are generated by using a special function new_var($\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\$S$. This string can later be used as a variable in a sparql constraint.
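A minimal sketch of such a generator (the counter-based implementation is an assumption of this sketch):

import itertools

_counter = itertools.count()

def new_var():
    # Returns '?x0', '?x1', ... ; each call yields a fresh variable name.
    return f"?x{next(_counter)}"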
Rule Format
This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.
General formatting conventions shared across all rule types:
Variable names are prefixed by `$'. Example: $X.
(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)
Concept names are written in camel case. Example: FilmProducer.
Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.
Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.
String literals are enclosed in single quotes. Example: 'ns:film:director'.
Rule Format ::: Grammar rule format
The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.
Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:
Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.
Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.
Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).
Shared values in feature structures are represented through the use of variables.
Specifically, in the rules index, CFQ grammar rules are described in the format
$T_{0}(F_{0})[H]/L_{0} \rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$
where:
Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).
Each $L_{i}$ for $i \in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.
$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.
Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.
$H$ is either an empty string or one of the variables $L_{i}$ for $i \in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.
Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.
This means that, for example, the rule
ACTIVE_VP[_head]/_head
$\rightarrow $ VP_SIMPLE(form:infinitive)/_head
can be considered a notational shorthand for the following rule expressed purely using feature lists:
(cat:ACTIVE_VP, sem:_head)[_head]
$\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)
Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).
Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).
Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.
Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\$v_{1}, ..., a_{k}$:$\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.
Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only if they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\sqcap $. For example, under this criterion, the logical form GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male or with ($\exists $RolePair(Predicate, Gender).Male) $\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male $\sqcap $ $\exists $RolePair(Predicate, GenderHaver).FilmProducer.
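To make the feature-list unification described above concrete, the following toy sketch unifies two flat feature lists with disjunction and absence; negation, default inheritance, and logical-form unification are omitted, and it is not the actual CFQ grammar engine:

def unify(features_a, features_b):
    # Values are strings; 'a|b' denotes disjunction and '_none_' denotes absence.
    # A missing attribute is unconstrained and unifies with anything.
    result = dict(features_a)
    for attribute, spec_b in features_b.items():
        if attribute not in result:
            result[attribute] = spec_b
            continue
        allowed = set(result[attribute].split("|")) & set(spec_b.split("|"))
        if not allowed:
            return None  # unification failure
        result[attribute] = "|".join(sorted(allowed))
    return result

# unify({'form': 'gerund|infinitive'}, {'form': 'gerund'})  -> {'form': 'gerund'}
# unify({'form': 'gerund|infinitive'}, {'form': 'pastparticiple'})  -> None
# unify({'subject': '_none_', 'object': 'yes'}, {'object': 'yes'})
#   -> {'subject': '_none_', 'object': 'yes'}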
Rule Format ::: Knowledge rule format
CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:
$\rightarrow K$, where $K$ is knowledge that is output.
By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.
The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.
All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.
Supported knowledge predicates:
BoundRolePairs
ExclusiveRolePair
FreebaseEntityMapping
FreebasePropertyMapping
FreebaseTypeMapping
NonExclusiveRolePair
Role
Rule Format ::: Inference rule format
CFQ inference rules transform logical forms and may be conditioned on knowledge. In the rules index, they are described in the following format:
$K: L_0 \rightarrow L_1$
where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$.
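A toy sketch of this variable-replacement semantics, representing logical forms as nested tuples and variables as strings starting with `$'; the data structures are illustrative and this is not the actual rule engine:

def match(pattern, term, subst):
    # Extend `subst` so that pattern matches term, or return None on failure.
    if isinstance(pattern, str) and pattern.startswith("$"):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(expr, subst):
    if isinstance(expr, str):
        return subst.get(expr, expr)
    return tuple(substitute(part, subst) for part in expr)

def apply_inference_rule(preconditions, l0, l1, knowledge_base, logical_form):
    # Apply K: L0 -> L1 if a substitution r exists with r(L0) = logical_form
    # and r(K) contained in the knowledge base.
    subst = match(l0, logical_form, {})
    if subst is None:
        return None
    if not all(substitute(k, subst) in knowledge_base for k in preconditions):
        return None
    return substitute(l1, subst)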
Rule Format ::: Resolution rule format
CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.
In the rules index, they are described in the following format:
$K: S_0 \rightarrow S_1~...~S_n$
where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.
Generation Algorithm
Our generation algorithm produces triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.
The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\langle \text{question, logical form} \rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\langle \text{text, logical form} \rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.
When a $\langle \text{question, logical form} \rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield as consistent as possible a rule application DAG and $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple for any given question.
Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.
The resulting $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple is then appended to the CFQ dataset.
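A simplified sketch of the sparql normalization step described above; the clause separator, the variable pattern, and the ordering of sorting versus renumbering are assumptions of this sketch:

import re

def normalize_sparql(query):
    # Sort the WHERE clauses alphabetically, then renumber variables in order
    # of first appearance in the rebuilt query.
    head, rest = query.split("{", 1)
    body, tail = rest.rsplit("}", 1)
    clauses = sorted(clause.strip() for clause in body.split(" . ") if clause.strip())
    rebuilt = head + "{ " + " . ".join(clauses) + " }" + tail
    mapping = {}
    def renumber(match):
        mapping.setdefault(match.group(0), f"?x{len(mapping)}")
        return mapping[match.group(0)]
    return re.sub(r"\?\w+", renumber, rebuilt)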
Generation Algorithm ::: Join by Logical Form
In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules mean that the complete behavior of the generator should be observable on any reasonably-sized train set. The same applies to certain core behaviors of the description logic $\mathcal {EL}$, such as commutativity and associativity of $\sqcap $, which we omit tracking as explicit rules due to their similar ubiquity of application.
One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example.
Generation Algorithm ::: Relationship between Generation and Parsing
Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task. The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text.
Generation Algorithm ::: Selecting an appropriate sample set
For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:
maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)
while using a uniform distribution from simple examples to increasingly more complex examples.
We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.
For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.
The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity.
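For illustration, a much-simplified sketch of the greedy entropy-maximizing selection; it ignores the restriction to small subgraphs and the per-complexity-level caps, is quadratic in the candidate pool, and assumes each candidate example carries the list of subgraph identifiers it contains:

import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c)

def greedy_subsample(candidates, target_size):
    selected, counts = [], Counter()
    remaining = list(candidates)
    while remaining and len(selected) < target_size:
        # Pick the candidate that maximizes the entropy of the resulting
        # subgraph frequency distribution.
        best = max(remaining, key=lambda ex: entropy(counts + Counter(ex["subgraphs"])))
        selected.append(best)
        counts.update(best["subgraphs"])
        remaining.remove(best)
    return selected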
Example of a rule application DAG
Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form.
Example of a rule application DAG ::: DAG normalization
As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.
Specifically, together with adopting the measures described above to ensure that rules are applied in a deterministic order, we achieve the normalization of the DAG by only producing edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\prime }$ with $B \rightarrow B^{\prime }$ (i.e., $B^{\prime }$ depends on $B$), we don't produce the edge $B^{\prime } \rightarrow A$.
Example of a rule application DAG ::: Concept abbreviations
For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:
Director = FilmDirector
Directee = DirectedFilm
Directing = DirectingAFilm
SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair( SubjectHaver, Subject), RolePair(Predicate, Agent))
ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair( ObjectHaver, Object), RolePair(Predicate, Undergoer))
E1 = Entity('?E1')
Example of a rule application DAG ::: Entity placeholders
As described in Section SECREF16, during generation we initially generate a $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.
Versions of entity rules applicable when using entity placeholders:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
Versions of entity rules applicable when using actual entity MIDs:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/'m.'$X
$\rightarrow $ 'm.'$X
ENTITY_MID:
ent2sparql('m.'$X) $\rightarrow $ 'ns:m.'$X
Example of a rule application DAG ::: Subgraphs and their weights
Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.
As described in Section SECREF6, given a large subset $\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \in \mathbb {G}$ that occurs in that sample as:

$w(G) = \max _{g \in \text{occ}(G)} \left(1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G)\right)$

where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set.
Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.
E.g. in the extreme case that $G$ only occurs within the context $G^{\prime }$, the weight of $G$ will be 0 in all samples. Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explainable by the frequency of occurrence of one of its super-compounds.
Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set.
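The sketch below mirrors the weight computation described above; the data layout (a list of per-occurrence supergraph sets and a precomputed table of the $P(G^{\prime }| G)$ values) is an assumption made for illustration.

```python
def compound_weight(occurrence_supergraphs, supergraph_prob):
    """Weight of a compound G within one sample.

    `occurrence_supergraphs` has one entry per occurrence g of G in the sample,
    listing the compounds G' that occur as strict supergraphs of that occurrence;
    `supergraph_prob[G']` is the empirical probability P(G'|G) over the full
    sample set.
    """
    weights = []
    for supergraphs in occurrence_supergraphs:
        max_p = max((supergraph_prob.get(gp, 0.0) for gp in supergraphs), default=0.0)
        # An occurrence is interesting if no frequent supergraph explains it.
        weights.append(1.0 - max_p)
    return max(weights, default=0.0)
```

Under this sketch, the weight of 0.4 in the example corresponds to a maximum empirical supergraph probability of 0.6 for the most interesting occurrence of the yellow subgraph.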
Rules Index
Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17.
Rules Index ::: Grammar rules
S=WHQ_F6E9egkQqxj:
S/_x
$\rightarrow $ WHQ/_x
WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:
WHQ[_subject]/DropDependency(_subject) $\sqcap $ DropDependency($\exists $RolePair(Subject, SubjectHaver)._action)
$\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject
VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action
NPQ_INDIRECT=WHO_5ptbPXXbuLZ:
NPQ_INDIRECT(number:singular)/Person
$\rightarrow $ 'who'
VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:
VP_INDIRECT(object:yes)[_action]/_action $\sqcap $ $\exists $RolePair(ObjectHaver, Object)._object
$\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action
DP/_object
VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:
VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head
ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:
ACTIVE_VP(number:singular)[_head]/_head
$\rightarrow $ VP_SIMPLE(form:past)/_head
VP_SIMPLE=VP_GHWf3fcVRZg:
VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head
$\rightarrow $ VP(concept_id:DirectingAFilm)/_head
VP=DIRECTED_JkYzNbQyXtv:
VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm
$\rightarrow $ 'directed'
DP=ENTITY_M6fSP5GvRaN:
DP(is_proper_noun:yes, number:singular)[_head]/_head
$\rightarrow $ ENTITY/_head
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
... (211 grammar rules total)
Rules Index ::: Inference rules
BOUND_ROLES_WITH_PREDICATE_OBJECT:
BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):
$\exists $RolePair($Q, $R).($A $\sqcap $ $B) $\rightarrow $ $\exists $RolePair($S, $T).($A $\sqcap $ $B)
BOUND_ROLES_WITH_PREDICATE_SUBJECT:
BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):
$B $\sqcap $ $\exists $RolePair($Q, $R).$A $\rightarrow $ $B $\sqcap $ $\exists $RolePair($S, $T).$A
IGNORE_BOUND_ROLE_PAIRS:
$A $\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\rightarrow $ $A
IGNORE_DEPENDENCY_DROPPING:
DropDependency($X) $\rightarrow $ $X
PREDICATE_UNREIFICATION:
Role($Q, $P), Role($R, $P):
$\exists $RolePair($Q, Predicate).($P $\sqcap $ $\exists $RolePair(Predicate, $R).$A) $\rightarrow $ $\exists $RolePair($Q, $R).$A
... (17 inference rules total)
Rules Index ::: Resolution rules
CONJUNCTION_WITHOUT_ENTITY:
def2sparql($X $\sqcap $ $Y, $V1) $\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
GET_SPECIALIZATIONS:
get_specializations($X) $\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'
GET_VAR_CONJUNCTION:
get_var($X $\sqcap $ $Y, $V1) $\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))
GET_VAR_RELATION:
get_var($\exists $$R.$X, $V1) $\rightarrow $ $V1
GET_VAR_TYPE:
FreebaseTypeMapping($X, $F):
get_var($X, $V1) $\rightarrow $ $V1
PROPERTY_MAPPING:
FreebasePropertyMapping($R, $F):
role2sparql($R) $\rightarrow $ $F
RELATION_MAPPING_WITHOUT_EXCLUSION:
NonExclusiveRolePair($R):
rel2sparql($X, $R, $Y) $\rightarrow $ $X role2sparql($R) $Y
RELATION_TO_ENTITY:
def2sparql($\exists $$R.$X, $V1) $\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))
SHARED_VAR:
shared_var($X, $X) $\rightarrow $ $X
SPECIALIZATION_OF_TYPE:
def2sparql($X, $V1) $\rightarrow $ $V1 ' a ' type2sparql($X)
TYPE_MAPPING:
FreebaseTypeMapping($X, $F):
type2sparql($X) $\rightarrow $ $F
... (21 resolution rules total)
Rules Index ::: Knowledge rules
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))
$\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')
$\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')
$\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)
$\rightarrow $ Role(DirectedFilm, DirectingFilm)
$\rightarrow $ Role(FilmDirector, DirectingFilm)
... (194 knowledge rules total) | between 0.81 and 0.88 |
d5ff8fc4d3996db2c96cb8af5a6d215484991e62 | d5ff8fc4d3996db2c96cb8af5a6d215484991e62_0 | Q: What are results of comparison between novel method to other approaches for creating compositional generalization benchmarks?
Text: Introduction
Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:
We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).
We present the Compositional Freebase Questions (CFQ) , a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).
We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).
We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).
Distribution-Based Compositionality Assessment (DBCA)
Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.
As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: "Who directed [entity]?", "Did Christopher Nolan [predicate] Inception?", etc.
To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and "Who produced Inception?" because the atoms are identically represented in the train and test sets while the compounds differ.
To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)
Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality
We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:
Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.
Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.
The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).
To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \in \mathbb {G}$ in a sample as $w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$, where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.
We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\alpha (P \Vert Q) = \sum _{k} p_k^\alpha \, q_k^{1-\alpha } \in [0, 1]$ BIBREF9. For the atom divergence, we use $\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire of making the atom distributions in train and test as similar as possible. For the compound divergence, we use $\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$ as:

$\mathcal {D}_C(V \Vert W) = 1 - C_{0.1}(\mathcal {F}_C(V) \Vert \mathcal {F}_C(W))$

$\mathcal {D}_A(V \Vert W) = 1 - C_{0.5}(\mathcal {F}_A(V) \Vert \mathcal {F}_A(W))$
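A minimal sketch of these two divergences is given below, assuming $\mathcal {F}_A$ and $\mathcal {F}_C$ are represented as Python dictionaries mapping atom or compound identifiers to normalized frequencies; it mirrors the formulas above rather than the exact released implementation.

```python
def chernoff_coefficient(p, q, alpha):
    """C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha), in [0, 1]."""
    return sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1.0 - alpha)
               for k in set(p) | set(q))

def atom_divergence(f_a_train, f_a_test):
    """D_A = 1 - C_0.5 (Bhattacharyya): symmetric, rewards closely matching atom distributions."""
    return 1.0 - chernoff_coefficient(f_a_train, f_a_test, alpha=0.5)

def compound_divergence(f_c_train, f_c_test):
    """D_C = 1 - C_0.1: a compound contributes almost its full test-set probability
    as long as it occurs in the train set at all."""
    return 1.0 - chernoff_coefficient(f_c_train, f_c_test, alpha=0.1)
```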
Based on these principles, we suggest using as the preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\mathcal {D}_A \le 0.02$). See Section SECREF4 for details about how to construct such splits.
The CFQ Dataset
We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.
The CFQ Dataset ::: Automatic, rule-based generation
BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.
Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).
In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):
Grammar rules that generate natural language constructs and corresponding logical forms.
Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.
Resolution rules that map constructs of the logical form to sparql constructs.
Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.
These rules define a language of triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.
The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18.
The CFQ Dataset ::: Dataset details and statistics
Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).
Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.
Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).
Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.
Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.
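A simplified sketch of this substitution step is shown below; the placeholder syntax and the `candidate_substitutions` structure (mapping each placeholder to an (English name, MID) pair obtained by executing the query against Freebase) are illustrative assumptions, and the construction of the negatively answered variant for closed questions is omitted.

```python
import random

def ground_example(question, query, candidate_substitutions, rng=random):
    """Replace entity placeholders with one randomly chosen satisfying combination.

    `candidate_substitutions` is a list of dicts mapping placeholder tokens
    (e.g. '[entity1]') to (english_name, mid) pairs that satisfy the query.
    Returns None if no combination exists, i.e. the candidate is abandoned.
    """
    if not candidate_substitutions:
        return None
    combo = rng.choice(candidate_substitutions)
    for placeholder, (name, mid) in combo.items():
        question = question.replace(placeholder, name)
        query = query.replace(placeholder, 'ns:' + mid)
    return question, query
```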
Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.
Release and statistics.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
Compositionality Experiments for CFQ and scan
The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\mathcal {D}_C(V \Vert W)$ and $\mathcal {D}_A(V \Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.
In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.
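The sketch below illustrates such a greedy construction under simplifying assumptions (unweighted compounds, a fixed candidate pool per step, and no removal step); all names are illustrative, and the released splits were produced with a more elaborate procedure.

```python
import random
from collections import Counter

def distribution(items_of, example_set):
    """Normalized frequency distribution of the atoms/compounds in a set of examples."""
    counts = Counter()
    for ex in example_set:
        counts.update(items_of[ex])
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

def divergence(p, q, alpha):
    """1 - Chernoff coefficient C_alpha(P || Q), as defined in Section SECREF6."""
    return 1.0 - sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1.0 - alpha)
                     for k in set(p) | set(q))

def greedy_split(examples, atoms_of, compounds_of, target_dc,
                 train_fraction=0.8, pool_size=100, seed=0):
    """Greedily add one example at a time to train or test (whichever is below
    its target size), choosing the candidate that keeps the compound divergence
    closest to `target_dc` while keeping the atom divergence low."""
    rng = random.Random(seed)
    pool = list(examples)
    rng.shuffle(pool)
    train, test = [pool.pop()], [pool.pop()]
    while pool:
        grow_train = len(train) < train_fraction * (len(train) + len(test))
        candidates = pool[:pool_size]

        def badness(u):
            tr = train + [u] if grow_train else train
            te = test if grow_train else test + [u]
            dc = divergence(distribution(compounds_of, tr),
                            distribution(compounds_of, te), alpha=0.1)
            da = divergence(distribution(atoms_of, tr),
                            distribution(atoms_of, te), alpha=0.5)
            return abs(dc - target_dc) + da

        best = min(candidates, key=badness)
        (train if grow_train else test).append(best)
        pool.remove(best)
    return train, test
```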
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\le N$, while the test set consists of examples with output length $> N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while the test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For scan, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.
The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.
Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.
This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar.
Experimental Results and Analysis ::: Experiment Setup
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22.
We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.
Note that while we construct test and validation sets from the same distribution, we suggest performing hyperparameter tuning on a random split (or a random subset of the train set) if one wants to measure the compositional generalization of a model with respect to an unknown test distribution, as opposed to that of an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence; it would thus measure the ability of a particular architecture to yield models that can be made to generalize in one particular way (by leaking information about the test set through the hyperparameters).
Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the SPARQL output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\mapsto $ `select count(*) where {M0 a ns:film.writer}'.
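A small sketch of this anonymization step is shown below, assuming the released list of unambiguous English names is available as a `name_to_mid` dictionary; the exact placeholder-assignment logic used for the released data may differ.

```python
def anonymize(question, query, name_to_mid):
    """Replace entity names in the question and their MIDs in the sparql query
    with shared placeholders M0, M1, ... in order of appearance in the question."""
    present = sorted((name for name in name_to_mid if name in question),
                     key=question.index)
    for i, name in enumerate(present):
        tag = 'M%d' % i
        question = question.replace(name, tag)
        query = query.replace('ns:' + name_to_mid[name], tag)
    return question, query
```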
The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18.
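Such a sweep can be driven by a loop like the following, reusing the `greedy_split` sketch shown earlier; the target range, the 0.1 step, and the seeds are placeholders rather than the exact settings used to produce the CFQ splits.

```python
def divergence_sweep(examples, atoms_of, compounds_of, max_dc=0.7, seeds=(0, 1, 2)):
    """Produce splits for target compound divergences 0.0, 0.1, ..., max_dc,
    with several differently randomized splits per target (cf. greedy_split above)."""
    targets = [round(0.1 * i, 1) for i in range(int(max_dc * 10) + 1)]
    return {(dc, seed): greedy_split(examples, atoms_of, compounds_of,
                                     target_dc=dc, seed=seed)
            for dc in targets for seed in seeds}
```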
Experimental Results and Analysis ::: Results and analysis for CFQ
The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:
All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).
The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.
For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.
This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.
Note that the experiment based on output length exhibits a worse accuracy than we would expect based on its compound divergence alone. One explanation is that its test distribution differs from the training distribution in ways other than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that although the baseline systems are known to struggle to generalize to longer lengths, the compound divergence is a stronger predictor of accuracy across splits than the length ratios.
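The $R^2$ values referred to here can be reproduced with a standard least-squares fit, e.g. of mean accuracy against compound divergence or against the length ratios; the snippet below is a generic sketch, not the authors' analysis code.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination R^2 of a least-squares linear fit of y on x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# e.g. compare r_squared(compound_divergences, mean_accuracies)
#      with    r_squared(output_length_ratios, mean_accuracies)
```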
Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed; more details are provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors on longer sequences, and when they err, the predicted output is about 20% too short. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3's spouse?” the best system ignores the adjective “female”.
Experimental Results and Analysis ::: Results and analysis for scan
To demonstrate the use of our analysis method on another dataset, we re-create the scan dataset BIBREF2, which consists of compositional navigation commands (e.g, `turn left twice and jump') mapped to corresponding action sequences (e.g., `lturn lturn jump'). We use the original grammar while tracking the rule applications used for the construction of each input-output pair. This enables us to compare the compositional generalization abilities of the baseline systems on this dataset in a novel way.
Figure FIGREF28(b) shows the graphs for the scan data set in the same setup as Figure FIGREF28(a) does for CFQ. We observe that the compound divergence again is a good predictor for the mean accuracy for all three architectures. One difference is that for scan the systems are able to attain accuracies close to 100% for compound divergences up to around 0.2, which is not the case for CFQ. This seems to be in line with the fact that overall CFQ is a more complex task than scan: the total number of rules used in generating scan is only 38 in comparison to 443 rules in the construction of CFQ.
Appendix SECREF14 provides a comparison to other experiments presented in previous work, including experiments that have significantly different atom distributions. We observe that this generally causes lower accuracies but does not break the correlation between accuracy and compound divergence.
Related Work
To measure compositional generalization for semantic parsing to SQL, BIBREF8 propose to ensure that no SQL query pattern occurs in both the train and the test set (“query split”), and they provide such splits for several data sets. By evaluating several ML architectures the authors confirm that this query-pattern split is harder to learn than a conventional split.
BIBREF2 introduce the scan dataset, and several publications provide interesting analyses of compositional generalization using it BIBREF5, BIBREF6. BIBREF7 discuss a particular extension of a seq2seq model that is effective in handling difficult scan sub-tasks by separating semantic and syntactic information during learning. Our contributions extend the analyses on the scan data in several ways: CFQ provides richer annotations and covers a broader subset of English than the scan dataset, and we propose a comprehensive score for assessing aggregate compositionality of a system on a given task.
The mathematics dataset BIBREF13 is a large, automatically generated set of 112M samples in 56 separated sub-tasks. The authors present data and experiments that share common goals with our approach, but focus on mathematical reasoning instead of natural language. Our breakdown of generation rules per train sample is more fine-grained, which allows a more precise compositional generalization analysis. Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which however do not focus on compositional generalization.
A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.
BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.
While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.
A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.
With regards to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. The author suggests to use a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived on the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner that “can generalize to more complex problems than the learner has previously encountered”.
Conclusion and Outlook
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would like also to extend our approach to broader subsets of language understanding, including use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains.
Example Dataset Item
The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...').
Data Quality Analysis
During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples from the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided just for ease of human reading. Manual checking also indicated that all questions are associated with semantically correct sparql queries. However, because we rely on the data present in Freebase, there are three debatable questions that sound somewhat unnatural (UNKREF33, UNKREF51, and UNKREF59; see further discussion below the list).
Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?
Which male person was a sibling of [Andrew Klavan]?
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?
Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?
What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?
Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?
Was [Cachún cachún ra ra!]'s director a film director's American child?
Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?
Was [Holly Ellenson]'s child [Wally Ellenson]?
Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?
Was a film producer [Lilies of the Ghetto]'s distributor and producer?
Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?
What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?
Who influenced and was influenced by [Black Dynamite]'s star?
What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?
Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?
Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?
What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?
Who was a male costume designer and director of [Ene... due... like... fake...] and [The Windmill Bar]?
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?
Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?
What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?
What male employee of [Weta Workshop] was [Bad Taste]'s editor?
Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?
What art director influenced [DreamWorks Animation]'s founder?
Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?
Was [Eric Massa] a cinematographer's parent's sibling's American sibling?
What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?
What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?
What did an editor and cinematographer of [Tongue Twister Variations] direct?
Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?
Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?
Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?
Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?
Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?
Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?
Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?
What Italian film director directed [Children of Hannibal]?
What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?
Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?
Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?
What did a sibling of [Louise Rohr] executive produce, produce, and edit?
Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?
Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?
Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?
Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?
Further discussion of the debatable questions:
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character.
Data Distribution Analysis ::: Answer frequencies
Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency.
Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels
Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even.
Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations
Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations.
Divergence-Based Split Analysis ::: Qualitative analysis of MCD@!START@$_{1}$@!END@
Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easy to observe “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9, we provide a random sample of size 20 from both the train and the test set here. Indeed, even for the MCD$_{1}$ split, which has a high compound divergence of 0.694, the 20 random train and test questions shown below cannot easily be told apart: both sets contain the same kinds of questions at varying sizes.
Train samples from MCD$_{1}$:
What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?
Which male person influenced and was influenced by [William Dean Howells]?
Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?
What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?
Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?
What did a spouse of [Shorts]'s producer's spouse executive produce and direct?
Did [P. G. Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?
What Mexican person did a film producer employ?
Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?
Who did [For the Secret Service]'s director's female spouse influence?
Who married, was influenced by, and influenced a company's founder?
Was [MAN SE]'s French male German employee's employer [Sulzer]?
Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?
Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?
Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?
Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?
Who was influenced by [Adam Sandler] and married a film producer?
Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?
Was a founder of [IG Port] employed by a film producer?
Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?
Test samples from MCD$_{1}$:
What sequel of [Paranormal Activity 2] was edited by and written by a film director?
What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?
Did [Pixar] employ an art director and employ [Susham Bedi]?
Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?
What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?
Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?
Was [Luke Larkin Music] an art director's employer?
What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?
Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?
What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?
Was a person's female sibling [Reggae in a Babylon]'s producer?
Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?
Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?
Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?
Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?
Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?
Was [The Voyeur]'s executive producer an actor's parent?
Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?
Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?
Was a costume designer's employer [Philips High School]?
Divergence-Based Split Analysis ::: Quantitative analysis of MCD$_{1}$
Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom or compound, we use the fraction of examples in which it appears. Both atoms and compounds are indexed primarily by their frequency in the train set and secondarily by their frequency in the test set, in decreasing order. For practical reasons, we only look at a small subset of compounds here, but we believe the analysis is representative.
We can see that the frequency of atoms in the two sets is closely aligned and that all atoms from the test set appear in the train set. The frequency of compounds, however, is wildly different: while some invariably occur in both sets, the frequencies are often not aligned, and most compounds appear in only one of the two sets.
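As a small illustration of how these frequencies can be computed, the sketch below derives the per-item frequency as the fraction of examples an atom (or compound) appears in and compares train against test. The function name and the set-of-identifiers representation of an example are assumptions made for the sketch, not part of the released code.

```python
from collections import Counter


def frequency_by_example(examples):
    """Fraction of examples each atom (or compound) appears in. `examples` is a
    list of sets, one per example, holding the atom or compound identifiers
    extracted from that example's rule application DAG."""
    counts = Counter()
    for items in examples:
        counts.update(set(items))  # count each item at most once per example
    n = len(examples)
    return {item: c / n for item, c in counts.items()}


# Hypothetical usage: compare train vs. test frequencies for a split.
train_freq = frequency_by_example([{"rule_A", "rule_B"}, {"rule_A"}])
test_freq = frequency_by_example([{"rule_A"}, {"rule_A", "rule_B"}])
missing_in_train = set(test_freq) - set(train_freq)  # empty for atoms in an MCD split
```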
Hyperparameters
The experiments were run using the tensor2tensor framework BIBREF39. Some of the hyperparameters were tuned during development using a random split of a previous, smaller version of the data set. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134.
Detailed error analysis ::: Breakdown of error types
Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).
The accuracy metric requires the model response and the golden (correct) answer to be exactly equal. Thus, a sparql query with the same clauses as the golden answer but in a different order, or with some clauses appearing multiple times, is also counted as an error despite being semantically equivalent to the golden answer. The number of such errors is relatively small, however, accounting for 1.8%, 0.6%, and 1.5% of the total test set size for LSTM+Attention, Transformer, and Universal Transformer, respectively.
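A rough way to count such order- or duplication-only mismatches is sketched below. The clause-splitting heuristic (taking the text between the braces and splitting on ' . ') is an assumption about the query serialization and ignores the SELECT header; it only illustrates the check, it does not reproduce the exact analysis.

```python
def clause_set(query: str) -> frozenset:
    """Rough clause extraction: take the text between '{' and '}' and split on
    ' . ' (an assumption about how the queries are serialized)."""
    body = query.split("{", 1)[1].rsplit("}", 1)[0]
    return frozenset(clause.strip() for clause in body.split(" . ") if clause.strip())


def equivalent_up_to_order_and_duplicates(pred: str, gold: str) -> bool:
    """True if the prediction differs from the golden query only in clause
    order or duplicated clauses."""
    if pred == gold:
        return False  # exact matches are not errors in the first place
    try:
        return clause_set(pred) == clause_set(gold)
    except IndexError:
        return False  # no braces found: treat as malformed


# Hypothetical usage over (prediction, golden) pairs:
# near_misses = sum(equivalent_up_to_order_and_duplicates(p, g) for p, g in pairs)
```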
Detailed error analysis ::: Qualitative error analysis
Below we qualitatively analyze a number of instances on which the models fail. We anonymize the MIDs in the same way as in the data provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k) and then randomly select queries from this list. In the following, we discuss a few cases in more detail. Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:
ns:people.person.child = ns:people.person.children|
ns:fictional_universe.fictional_character.children|
ns:organization.organization.child/
ns:organization.organization_relationship.child
ns:people.person.sibling = ns:people.person.siblings/
ns:people.sibling_relationship.sibling|
ns:fictional_universe.fictional_character.siblings/
ns:fictional_universe.sibling_relationship_of_fictional_characters.siblings
Query 1: “What sibling of M0 was M1' s parent?”
Golden (correct) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.child M1 .
?x0 ns:people.person.sibling M0 .
FILTER ( ?x0 != M0 )
}
Inferred (system) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.sibling ?x1 .
?x0 ns:people.person.sibling M0 .
?x1 ns:people.person.child M1 .
FILTER ( ?x0 != ?x1 )
}
Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set in order to show why we believe it provides enough information for the question to be answered correctly.
Some subqueries of the query and their occurrence counts are shown in Table TABREF140. While the exact subquery “What sibling” does not occur during training, the two words are shown separately in many instances: the subqueries “sibling of Mx” and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of its rule tree with those shown during training. As can be read from the table, similar sentences have been shown during training. Some examples are:
What was executive produced by and written by a sibling of M0?
What costume designer did M1's parent employ?
What cinematographer was a film editor that M2 and M3 married?
What film director was a character influenced by M2?
Query 2: “Did a male film director edit and direct M0 and M1?”
Golden (correct) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:film.editor.film M1 .
?x0 ns:people.person.gender m_05zppz
}
Inferred (system) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:people.person.gender m_05zppz
}
Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the relation between the director and movie M1.
Looking at subqueries and their occurrence counts (Table TABREF145), we see again that various subqueries occur often during training. However, “edit and direct” have rarely been shown together. When looking at the rule trees, we see that both conjunctions in the query occur often during training, but separately: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur during training. This may be why all systems fail on this example, but we believe a compositional learner should nevertheless be able to generalize correctly given the training instances. Some related training examples are:
Did a male film director that M3's parent married influence an art director?
Did a film producer that played M2 edit and direct M1?
Did a screenwriter edit and direct a sequel of M1
Did a Chinese male film director edit M1 and M2?
Additional experimental results on scan
Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available. (Data splits obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):
BIBREF2:
simple (random)
by action sequence length
adding a primitive and adding a primitive along with complex combinations
BIBREF6:
adding a template
adding template fillers
adding more training examples of fillers (fewshot)
In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.
We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that compound divergence is correlated with accuracy for these baseline architectures. One difference to the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific, potentially problematic scenario for learning. E.g., in the experiment `primitive<jump>' (with very low accuracies for all three systems), the jump command is shown in exactly one combination (namely alone) in the training data, while it occurs in arbitrary combinations in all test examples.
This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).
While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports).
Analysis of relations between accuracy, compound divergence, and training size
Figure FIGREF28 shows for all baseline systems a strong correlation between accuracy and compound divergence for the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation changes for different training sizes. Figures FIGREF159 and FIGREF159 show that the correlation also holds for smaller training sizes, but that the accuracy is generally somewhat lower in that case.
At the same time, we observe that the difference between the accuracies of various training sizes gets smaller as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach a training size of about 80k for CFQ and about 6k for SCAN. This indicates that further increasing the train set size may not be sufficient to do well on these compositionality experiments.
Logical Form
To represent our logical form we use syntax of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.
Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:
and the following would be roles:
Note that our logical form does not have roles other than those of the form RolePair($C_1$, $C_2$).
New strings are generated by using a special function new_var($\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\$S$. This string can later be used as a variable in a sparql constraint.
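A minimal sketch of such a variable factory is shown below; the class name and interface are illustrative assumptions, but the behavior (returning fresh strings of the form ?x<N>) follows the description above.

```python
import itertools


class VariableFactory:
    """Minimal sketch of the new_var($S) helper: each call returns a fresh
    string of the form ?x<N>, which later rules can reuse as a sparql variable."""

    def __init__(self):
        self._counter = itertools.count()

    def new_var(self) -> str:
        return f"?x{next(self._counter)}"


factory = VariableFactory()
v0, v1 = factory.new_var(), factory.new_var()  # '?x0', '?x1'
```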
Rule Format
This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.
General formatting conventions shared across all rule types:
Variable names are prefixed by `$'. Example: $X.
(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)
Concept names are written in camel case. Example: FilmProducer.
Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.
Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.
String literals are enclosed in single quotes. Example: 'ns:film:director'.
Rule Format ::: Grammar rule format
The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.
Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:
Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.
Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.
Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).
Shared values in feature structures are represented through the use of variables.
Specifically, in the rules index, CFQ grammar rules are described in the format
$T_{0}(F_{0})[H]/L_{0} \rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$
where:
Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).
Each $L_{i}$ for $i \in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.
$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.
Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.
$H$ is either an empty string or one of the variables $L_{i}$ for $i \in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.
Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.
This means that, for example, the rule
ACTIVE_VP[_head]/_head
$\rightarrow $ VP_SIMPLE(form:infinitive)/_head
can be considered a notational shorthand for the following rule expressed purely using feature lists:
(cat:ACTIVE_VP, sem:_head)[_head]
$\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)
Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).
Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).
Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.
Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\$v_{1}, ..., a_{k}$:$\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.
Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\sqcap $. For example, under this criterion, the logical form GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male or with ($\exists $RolePair(Predicate, Gender).Male) $\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male $\sqcap $ $\exists $RolePair(Predicate, GenderHaver).FilmProducer.
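The sketch below illustrates the flavor of feature-list unification described in this section, assuming features are represented as Python dictionaries mapping attributes to sets of allowed values (a multi-element set for disjunction, the special value _none_ for absence, and negation expanded against a caller-supplied value universe). It covers only feature lists, not the unification of logical forms, and all names are illustrative assumptions rather than the actual generator code.

```python
NONE = "_none_"


def expand_negation(spec, universes):
    """Turn ('-', {...}) entries into positive sets: all other values of the
    attribute plus _none_, as described in the text."""
    out = {}
    for attr, value in spec.items():
        if isinstance(value, tuple) and value[0] == "-":
            out[attr] = (universes[attr] | {NONE}) - value[1]
        else:
            out[attr] = frozenset(value)
    return out


def unify(a, b):
    """Unify two feature lists; a missing attribute is unconstrained.
    Returns the unified feature list, or None if unification fails."""
    result = {}
    for attr in set(a) | set(b):
        va, vb = a.get(attr), b.get(attr)
        if va is None:
            result[attr] = frozenset(vb)
        elif vb is None:
            result[attr] = frozenset(va)
        else:
            common = frozenset(va) & frozenset(vb)
            if not common:
                return None
            result[attr] = common
    return result


# The examples from the text:
assert unify({"form": {"gerund", "infinitive"}}, {"form": {"gerund"}})
assert unify({"form": {"gerund", "infinitive"}}, {"form": {"pastparticiple"}}) is None
assert unify({"subject": {NONE}, "object": {"yes"}}, {"object": {"yes"}})
assert unify({"subject": {NONE}, "object": {"yes"}},
             {"subject": {"yes"}, "object": {"yes"}}) is None

universe = {"form": frozenset({"gerund", "infinitive", "pastparticiple"})}
neg = expand_negation({"form": ("-", {"gerund", "infinitive"})}, universe)
assert unify(neg, {"form": {"pastparticiple"}}) and unify(neg, {"form": {NONE}})
assert unify(neg, {"form": {"gerund"}}) is None
```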
Rule Format ::: Knowledge rule format
CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:
$\rightarrow K$, where $K$ is knowledge that is output.
By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.
The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.
All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.
Supported knowledge predicates:
BoundRolePairs
ExclusiveRolePair
FreebaseEntityMapping
FreebasePropertyMapping
FreebaseTypeMapping
NonExclusiveRolePair
Role
Rule Format ::: Inference rule format
CFQ inference rules transform logical forms and may be conditioned on knowledge. In the rules index, they are described in the following format:
$K: L_0 \rightarrow L_1$
where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$.
Rule Format ::: Resolution rule format
CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.
In the rules index, they are described in the following format:
$K: S_0 \rightarrow S_1~...~S_n$
where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.
Generation Algorithm
Our generation algorithm produces triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.
The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\langle \text{question, logical form} \rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\langle \text{text, logical form} \rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.
When a $\langle \text{question, logical form} \rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield as consistent as possible a rule application DAG and $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple for any given question.
Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.
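The following sketch illustrates the normalization step on a list of clause strings. The exact variable-numbering convention of the real generator is an assumption (here: order of first appearance); the sketch only demonstrates the idea of renumbering variables and sorting clauses alphabetically.

```python
import re

VAR = re.compile(r"\?x\d+")


def normalize_sparql(clauses):
    """Rename variables to a standard increasing order (here: order of first
    appearance) and sort the clauses alphabetically."""
    mapping = {}

    def rename(match):
        var = match.group(0)
        if var not in mapping:
            mapping[var] = f"?x{len(mapping)}"
        return mapping[var]

    renamed = [VAR.sub(rename, clause) for clause in clauses]
    return sorted(renamed)


print(normalize_sparql(["?x3 ns:people.person.sibling M0",
                        "?x3 ns:people.person.child M1"]))
# ['?x0 ns:people.person.child M1', '?x0 ns:people.person.sibling M0']
```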
The resulting $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple is then appended to the CFQ dataset.
Generation Algorithm ::: Join by Logical Form
In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules mean that the complete behavior of the generator should be observable on any reasonably-sized train set. The same applies to certain core behaviors of the description logic $\mathcal {EL}$, such as commutativity and associativity of $\sqcap $, which we omit tracking as explicit rules due to their similar ubiquity of application.
One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example.
Generation Algorithm ::: Relationship between Generation and Parsing
Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task. The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text.
Generation Algorithm ::: Selecting an appropriate sample set
For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:
maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)
while using a uniform distribution from simple examples to increasingly complex examples.
We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.
For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.
The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity.
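A highly simplified sketch of the greedy entropy-maximizing selection is given below. It ignores the restriction to small subgraphs, the per-complexity-level capping, and all efficiency concerns; the representation of candidates as a mapping from example ids to lists of compound ids is an assumption of the sketch.

```python
import math
from collections import Counter


def entropy(counter):
    total = sum(counter.values())
    return -sum((c / total) * math.log(c / total) for c in counter.values())


def greedy_subsample(candidates, k):
    """Pick up to k examples, at each step choosing the one whose compounds
    maximize the empirical entropy of the selected set's compound distribution.
    `candidates` maps example ids to lists of compound ids."""
    selected, counts = [], Counter()
    remaining = dict(candidates)
    while remaining and len(selected) < k:
        best_id, best_gain = None, -float("inf")
        for ex_id, compounds in remaining.items():
            gain = entropy(counts + Counter(compounds))
            if gain > best_gain:
                best_id, best_gain = ex_id, gain
        selected.append(best_id)
        counts += Counter(remaining.pop(best_id))
    return selected


subset = greedy_subsample({"q1": ["c1", "c2"], "q2": ["c1"], "q3": ["c3"]}, k=2)
```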
Example of a rule application DAG
Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form.
Example of a rule application DAG ::: DAG normalization
As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.
Specifically, together with adopting the measures described above to ensure that rules are applied in a deterministic order, we achieve the normalization of the DAG by only producing edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\prime }$ with $B \rightarrow B^{\prime }$ (i.e., $B^{\prime }$ depends on $B$), we don't produce the edge $B^{\prime } \rightarrow A$.
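The edge-selection criterion can be sketched as follows, assuming that for a given rule application A we know the set of candidate parent applications and a predicate telling us whether one candidate depends on another; both the function signature and the predicate are illustrative assumptions, not the generator's actual interface.

```python
def minimal_dependencies(candidate_parents, depends_on):
    """Among all rule applications after which A could be applied, keep only
    those that do not themselves depend on another candidate, so that only
    'minimal dependency' edges are produced."""
    return {
        b for b in candidate_parents
        if not any(depends_on(b, other) for other in candidate_parents if other != b)
    }


# Matching the description above: A could be applied after B or B', and B'
# depends on B, so only the edge B -> A is kept.
parents = minimal_dependencies({"B", "B_prime"},
                               lambda x, y: (x, y) == ("B_prime", "B"))
assert parents == {"B"}
```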
Example of a rule application DAG ::: Concept abbreviations
For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:
Director = FilmDirector
Directee = DirectedFilm
Directing = DirectingAFilm
SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair( SubjectHaver, Subject), RolePair(Predicate, Agent))
ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair( ObjectHaver, Object), RolePair(Predicate, Undergoer))
E1 = Entity('?E1')
Example of a rule application DAG ::: Entity placeholders
As described in Section SECREF16, during generation we initially generate a $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.
Versions of entity rules applicable when using entity placeholders:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
Versions of entity rules applicable when using actual entity MIDs:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/'m.'$X
$\rightarrow $ 'm.'$X
ENTITY_MID:
ent2sparql('m.'$X) $\rightarrow $ 'ns:m.'$X
Example of a rule application DAG ::: Subgraphs and their weights
Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.
As described in Section SECREF6, given a large subset $\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \in \mathbb {G}$ that occurs in that sample as $w(G) = \max _{g \in \text{occ}(G)} \big (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }\,|\,G)\big )$,
where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set.
Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.
E.g. in the extreme case that $G$ only occurs within the context $G^{\prime }$, the weight of $G$ will be 0 in all samples. Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explainable by the frequency of occurrence of one of its super-compounds.
Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set.
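The weight computation for a single sample can be sketched as below, assuming the empirical supergraph probabilities $P(G^{\prime }|G)$ have been precomputed over the full sample set; the callback-based interface is an assumption of the sketch.

```python
def subgraph_weight(occurrences, supergraph_probs):
    """Weight of a compound G in one sample: for each occurrence g of G, take
    the complement of the largest empirical probability P(G' | G) over strict
    supergraphs G' of g, then keep the maximum over all occurrences.
    `supergraph_probs(g)` returns those probabilities (precomputed elsewhere)."""
    weights = []
    for g in occurrences:
        probs = supergraph_probs(g)
        weights.append(1.0 - max(probs, default=0.0))
    return max(weights, default=0.0)


# Hypothetical example: G occurs twice; the first occurrence is subsumed by
# some G' in 60% of cases, the second in 90% of cases, so w(G) = 0.4.
w = subgraph_weight(["g1", "g2"], lambda g: [0.6] if g == "g1" else [0.9, 0.2])
assert abs(w - 0.4) < 1e-9
```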
Rules Index
Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17.
Rules Index ::: Grammar rules
S=WHQ_F6E9egkQqxj:
S/_x
$\rightarrow $ WHQ/_x
WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:
WHQ[_subject]/DropDependency(_subject) $\sqcap $ DropDependency($\exists $RolePair(Subject, SubjectHaver)._action)
$\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject
VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action
NPQ_INDIRECT=WHO_5ptbPXXbuLZ:
NPQ_INDIRECT(number:singular)/Person
$\rightarrow $ 'who'
VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:
VP_INDIRECT(object:yes)[_action]/_action $\sqcap $ $\exists $RolePair(ObjectHaver, Object)._object
$\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action
DP/_object
VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:
VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head
ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:
ACTIVE_VP(number:singular)[_head]/_head
$\rightarrow $ VP_SIMPLE(form:past)/_head
VP_SIMPLE=VP_GHWf3fcVRZg:
VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head
$\rightarrow $ VP(concept_id:DirectingAFilm)/_head
VP=DIRECTED_JkYzNbQyXtv:
VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm
$\rightarrow $ 'directed'
DP=ENTITY_M6fSP5GvRaN:
DP(is_proper_noun:yes, number:singular)[_head]/_head
$\rightarrow $ ENTITY/_head
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
... (211 grammar rules total)
Rules Index ::: Inference rules
BOUND_ROLES_WITH_PREDICATE_OBJECT:
BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):
$\exists $RolePair($Q, $R).($A $\sqcap $ $B) $\rightarrow $ $\exists $RolePair($S, $T).($A $\sqcap $ $B)
BOUND_ROLES_WITH_PREDICATE_SUBJECT:
BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):
$B $\sqcap $ $\exists $RolePair($Q, $R).$A $\rightarrow $ $B $\sqcap $ $\exists $RolePair($S, $T).$A
IGNORE_BOUND_ROLE_PAIRS:
$A $\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\rightarrow $ $A
IGNORE_DEPENDENCY_DROPPING:
DropDependency($X) $\rightarrow $ $X
PREDICATE_UNREIFICATION:
Role($Q, $P), Role($R, $P):
$\exists $RolePair($Q, Predicate).($P $\sqcap $ $\exists $RolePair(Predicate, $R).$A) $\rightarrow $ $\exists $RolePair($Q, $R).$A
... (17 inference rules total)
Rules Index ::: Resolution rules
CONJUNCTION_WITHOUT_ENTITY:
def2sparql($X $\sqcap $ $Y, $V1) $\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
GET_SPECIALIZATIONS:
get_specializations($X) $\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'
GET_VAR_CONJUNCTION:
get_var($X $\sqcap $ $Y, $V1) $\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))
GET_VAR_RELATION:
get_var($\exists $$R.$X, $V1) $\rightarrow $ $V1
GET_VAR_TYPE:
FreebaseTypeMapping($X, $F):
get_var($X, $V1) $\rightarrow $ $V1
PROPERTY_MAPPING:
FreebasePropertyMapping($R, $F):
role2sparql($R) $\rightarrow $ $F
RELATION_MAPPING_WITHOUT_EXCLUSION:
NonExclusiveRolePair($R):
rel2sparql($X, $R, $Y) $\rightarrow $ $X role2sparql($R) $Y
RELATION_TO_ENTITY:
def2sparql($\exists $$R.$X, $V1) $\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))
SHARED_VAR:
shared_var($X, $X) $\rightarrow $ $X
SPECIALIZATION_OF_TYPE:
def2sparql($X, $V1) $\rightarrow $ $V1 ' a ' type2sparql($X)
TYPE_MAPPING:
FreebaseTypeMapping($X, $F):
type2sparql($X) $\rightarrow $ $F
... (21 resolution rules total)
Rules Index ::: Knowledge rules
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))
$\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')
$\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')
$\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)
$\rightarrow $ Role(DirectedFilm, DirectingFilm)
$\rightarrow $ Role(FilmDirector, DirectingFilm)
... (194 knowledge rules total) | The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments |
d9c6493e1c3d8d429d4ca608f5acf29e4e7c4c9b | d9c6493e1c3d8d429d4ca608f5acf29e4e7c4c9b_0 | Q: How authors justify that question answering dataset presented is realistic?
Text: Introduction
Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:
We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).
We present the Compositional Freebase Questions (CFQ), a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).
We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).
We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).
Distribution-Based Compositionality Assessment (DBCA)
Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.
As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: "Who directed [entity]?", "Did Christopher Nolan [predicate] Inception?", etc.
To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and "Who produced Inception?" because the atoms are identically represented in the train and test sets while the compounds differ.
To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)
Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality
We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:
Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.
Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.
The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).
To determine the degree to which a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\mathbb {G}$ of subgraphs, then weight them in the context of their occurrence, and keep only those with the highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \in \mathbb {G}$ in a sample as $w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$, where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.
We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\alpha (P \Vert Q) = \sum _{k} p_k^\alpha \, q_k^{1-\alpha } \in [0, 1]$ BIBREF9. For the atom divergence, we use $\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire of making the atom distributions in train and test as similar as possible. For the compound divergence, we use $\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$ as follows: $\mathcal {D}_C(V \Vert W) = 1 - C_{0.1}(\mathcal {F}_C(V) \Vert \mathcal {F}_C(W))$ and $\mathcal {D}_A(V \Vert W) = 1 - C_{0.5}(\mathcal {F}_A(V) \Vert \mathcal {F}_A(W))$.
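A minimal sketch of these divergence computations is shown below, assuming the weighted frequency distributions are given as dictionaries of normalized weights; it mirrors the definitions above but is not the released implementation.

```python
def chernoff_coefficient(p, q, alpha):
    """C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha), with p and q given
    as dictionaries mapping atoms/compounds to normalized weights."""
    return sum(p[k] ** alpha * q.get(k, 0.0) ** (1.0 - alpha) for k in p)


def atom_divergence(train_dist, test_dist):
    return 1.0 - chernoff_coefficient(train_dist, test_dist, alpha=0.5)


def compound_divergence(train_dist, test_dist):
    return 1.0 - chernoff_coefficient(train_dist, test_dist, alpha=0.1)


# Identical atom distributions give an atom divergence of 0.
train_atoms = {"direct": 0.5, "produce": 0.5}
test_atoms = {"direct": 0.5, "produce": 0.5}
assert abs(atom_divergence(train_atoms, test_atoms)) < 1e-9
```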
Based on these principles, we suggest using as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\mathcal {D}_A \le 0.02$). See Section SECREF4 for details about how to construct such splits.
The CFQ Dataset
We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.
The CFQ Dataset ::: Automatic, rule-based generation
BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.
Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).
In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):
Grammar rules that generate natural language constructs and corresponding logical forms.
Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.
Resolution rules that map constructs of the logical form to sparql constructs.
Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.
These rules define a language of triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.
The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18.
The CFQ Dataset ::: Dataset details and statistics
Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).
Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.
Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).
Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.
Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.
Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.
Release and statistics.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
Compositionality Experiments for CFQ and scan
The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\mathcal {D}_C(V \Vert W)$ and $\mathcal {D}_A(V \Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.
In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.
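A minimal sketch of this greedy procedure is given below. It assumes caller-supplied functions d_atom(train, test) and d_comp(train, test) that compute the atom and compound divergences, and it omits both the occasional removal of examples used to escape local optima and any efficiency optimizations.

import random

def split_score(train, test, d_atom, d_comp, target_dc, max_da):
    """Distance of a candidate split from the desired divergences."""
    if not train or not test:
        return float("inf")
    da, dc = d_atom(train, test), d_comp(train, test)
    penalty = 0.0 if da <= max_da else 100.0 * (da - max_da)   # keep atom divergence low
    return abs(dc - target_dc) + penalty

def greedy_split(pool, d_atom, d_comp, target_dc, max_da=0.02, train_ratio=0.8, seed=0):
    rng = random.Random(seed)
    pool = list(pool)
    rng.shuffle(pool)
    train, test = [pool.pop()], [pool.pop()]   # random start
    while pool:
        # Grow whichever side keeps the train/test ratio roughly fixed.
        grow_train = len(train) < train_ratio * (len(train) + len(test) + 1)
        def candidate_score(i):
            u = pool[i]
            tr = train + [u] if grow_train else train
            te = test if grow_train else test + [u]
            return split_score(tr, te, d_atom, d_comp, target_dc, max_da)
        best_i = min(range(len(pool)), key=candidate_score)    # greedy choice of next example
        (train if grow_train else test).append(pool.pop(best_i))
    return train, test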
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\le \hspace{-2.5pt} N$, while the test set consists of examples with output length $> \hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.
The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.
Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.
This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar.
Experimental Results and Analysis ::: Experiment Setup
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22.
We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.
Note that while we construct test and validation sets from the same distribution, we suggest that hyperparameter tuning should be done on a random split (or random subset of the train set) if one wants to measure compositional generalization of a model with respect to an unknown test distribution as opposed to an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence and thus measure the ability of a particular architecture to yield models that can be made to generalize in one particular way (through leaking information about the test set in the hyperparameters).
Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the SPARQL output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\mapsto $ `select count(*) where {M0 a ns:film.writer}'.
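A hedged sketch of this anonymization, assuming each example carries an ordered list of (entity name, MID) annotations; the real pipeline operates on the generator's entity placeholders rather than on raw string replacement.

def anonymize(question, sparql, entities):
    """Replace entity names in the question and MIDs in the SPARQL query with
    shared placeholders M0, M1, ...; `entities` is an ordered list of (name, mid)."""
    for i, (name, mid) in enumerate(entities):
        placeholder = f"M{i}"
        question = question.replace(name, placeholder)
        sparql = sparql.replace(f"ns:{mid}", placeholder)
    return question, sparql

# Hypothetical usage (made-up MID):
# anonymize("Was Some Film a sequel", "... ns:m.0abc12 ...", [("Some Film", "m.0abc12")])
# -> ("Was M0 a sequel", "... M0 ...")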
The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18.
Experimental Results and Analysis ::: Results and analysis for CFQ
The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:
All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).
The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.
For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.
This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.
Note that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compound divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the length ratios.
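For reference, such an $R^2$ value can be obtained as the squared Pearson correlation of a simple linear fit; a sketch with hypothetical variable names:

import numpy as np

def r_squared(x, y):
    """R^2 of a simple linear fit of y on x (the squared Pearson correlation)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# e.g. compare r_squared(compound_divergence_per_split, accuracy_per_split)
#      with    r_squared(output_length_ratio_per_split, accuracy_per_split)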
Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed, with more details provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors for longer sequences and, when they make an error, produce output that is about 20% too short. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3's spouse?” the best system ignores the adjective “female”.
Experimental Results and Analysis ::: Results and analysis for scan
To demonstrate the use of our analysis method on another dataset, we re-create the scan dataset BIBREF2, which consists of compositional navigation commands (e.g, `turn left twice and jump') mapped to corresponding action sequences (e.g., `lturn lturn jump'). We use the original grammar while tracking the rule applications used for the construction of each input-output pair. This enables us to compare the compositional generalization abilities of the baseline systems on this dataset in a novel way.
Figure FIGREF28(b) shows the graphs for the scan data set in the same setup as Figure FIGREF28(a) does for CFQ. We observe that the compound divergence again is a good predictor for the mean accuracy for all three architectures. One difference is that for scan the systems are able to attain accuracies close to 100% for compound divergences up to around 0.2, which is not the case for CFQ. This seems to be in line with the fact that overall CFQ is a more complex task than scan: the total number of rules used in generating scan is only 38 in comparison to 443 rules in the construction of CFQ.
Appendix SECREF14 provides a comparison to other experiments presented in previous work, including experiments that have significantly different atom distributions. We observe that this generally causes lower accuracies but does not break the correlation between accuracy and compound divergence.
Related Work
To measure compositional generalization for semantic parsing to SQL, BIBREF8 propose to ensure that no SQL query pattern occurs in both the train and the test set (“query split”), and they provide such splits for several data sets. By evaluating several ML architectures the authors confirm that this query-pattern split is harder to learn than a conventional split.
BIBREF2 introduce the scan dataset, and several publications provide interesting analyses of compositional generalization using it BIBREF5, BIBREF6. BIBREF7 discuss a particular extension of a seq2seq model that is effective in handling difficult scan sub-tasks by separating semantic and syntactic information during learning. Our contributions extend the analyses on the scan data in several ways: CFQ provides richer annotations and covers a broader subset of English than the scan dataset, and we propose a comprehensive score for assessing aggregate compositionality of a system on a given task.
The mathematics dataset BIBREF13 is a large, automatically generated set of 112M samples in 56 separated sub-tasks. The authors present data and experiments that share common goals with our approach, but focus on mathematical reasoning instead of natural language. Our breakdown of generation rules per train sample is more fine-grained, which allows a more precise compositional generalization analysis. Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which however do not focus on compositional generalization.
A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.
BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.
While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.
A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.
With regards to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. The author suggests to use a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived on the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner that “can generalize to more complex problems than the learner has previously encountered”.
Conclusion and Outlook
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data, and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would like also to extend our approach to broader subsets of language understanding, including use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains.
Example Dataset Item
The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...').
Data Quality Analysis
During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples of the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided just for ease of human reading. Manual checking also indicated that all questions are associated with semantically correct sparql queries. However, because we rely on the data present in Freebase, there are three debatable questions which sound somewhat unnatural (items 3, 21, and 29 in the list below); they are repeated and discussed after the list.
Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?
Which male person was a sibling of [Andrew Klavan]?
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?
Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?
What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?
Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?
Was [Cachún cachún ra ra!]'s director a film director's American child?
Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?
Was [Holly Ellenson]'s child [Wally Ellenson]?
Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?
Was a film producer [Lilies of the Ghetto]'s distributor and producer?
Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?
What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?
Who influenced and was influenced by [Black Dynamite]'s star?
What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?
Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?
Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?
What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?
Who was a male costume designer and director of [Ene... due... like... fake...] and [The Windmill Bar]?
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?
Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?
What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?
What male employee of [Weta Workshop] was [Bad Taste]'s editor?
Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?
What art director influenced [DreamWorks Animation]'s founder?
Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?
Was [Eric Massa] a cinematographer's parent's sibling's American sibling?
What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?
What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?
What did an editor and cinematographer of [Tongue Twister Variations] direct?
Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?
Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?
Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?
Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?
Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?
Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?
Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?
What Italian film director directed [Children of Hannibal]?
What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?
Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?
Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?
What did a sibling of [Louise Rohr] executive produce, produce, and edit?
Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?
Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?
Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?
Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?
Further discussion of the debatable questions:
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character.
Data Distribution Analysis ::: Answer frequencies
Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency.
Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels
Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even.
Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations
Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations.
Divergence-Based Split Analysis ::: Qualitative analysis of MCD$_{1}$
Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easily observable “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9, we provide a random sample of size 20 from both the train and test set here. Indeed, even for the MCD$_{1}$ split with a high divergence of 0.694, the 20 random samples of train and test questions shown below cannot easily be distinguished, as both contain the same kinds of questions of varying sizes.
Train samples from MCD$_{1}$:
What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?
Which male person influenced and was influenced by [William Dean Howells]?
Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?
What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?
Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?
What did a spouse of [Shorts]'s producer's spouse executive produce and direct?
Did [P. G. Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?
What Mexican person did a film producer employ?
Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?
Who did [For the Secret Service]'s director's female spouse influence?
Who married, was influenced by, and influenced a company's founder?
Was [MAN SE]'s French male German employee's employer [Sulzer]?
Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?
Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?
Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?
Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?
Who was influenced by [Adam Sandler] and married a film producer?
Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?
Was a founder of [IG Port] employed by a film producer?
Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?
Test samples from MCD$_{1}$:
What sequel of [Paranormal Activity 2] was edited by and written by a film director?
What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?
Did [Pixar] employ an art director and employ [Susham Bedi]?
Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?
What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?
Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?
Was [Luke Larkin Music] an art director's employer?
What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?
Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?
What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?
Was a person's female sibling [Reggae in a Babylon]'s producer?
Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?
Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?
Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?
Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?
Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?
Was [The Voyeur]'s executive producer an actor's parent?
Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?
Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?
Was a costume designer's employer [Philips High School]?
Divergence-Based Split Analysis ::: Quantitative analysis of MCD$_{1}$
Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom or compound, we use the fraction of examples it appears in. Both atoms and compounds are indexed primarily by their frequency in the train set, secondarily by their frequency in the test set, in decreasing order. For practical reasons we only look at a small subset of compounds here, but we believe the analysis is representative.
We can see that the frequency of atoms in the two sets is very well aligned and that all atoms from the test set appear in the train set. The frequency of compounds, however, is wildly different: while some invariably occur in both sets, the frequencies are often not aligned, and most compounds appear only in either the train or the test set.
Hyperparameters
The experiments were run using the tensor2tensor framework BIBREF39 with some of the hyperparameters tuned using a random split of a previous, smaller version of the data set during development. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134.
Detailed error analysis ::: Breakdown of error types
Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).
The accuracy metric requires the model response and the golden (correct) answer to be exactly equal to each other. Thus, a sparql query with the same clauses as the golden answer but in a different order or with some of the clauses appearing multiple times is also considered to be an error despite being equivalent to the golden answer in its meaning. The number of such errors is relatively small though, accounting for 1.8%, 0.6% and 1.5% of the total test set size for LSTM+Attention, Transformer and Universal Transformer respectively.
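To make the metric concrete, the sketch below contrasts the exact-match check used for accuracy with a looser, order- and duplicate-insensitive comparison of the WHERE clauses. The clause splitting on ' . ' is a simplification that happens to work for CFQ-style queries and is not the evaluation code used for the reported numbers.

def exact_match(pred, gold):
    """Accuracy as used in the paper: the full query strings must match exactly."""
    return pred.strip() == gold.strip()

def clause_set_match(pred, gold):
    """Looser comparison that ignores clause order and duplicates in the WHERE body."""
    def clauses(query):
        if "{" not in query or "}" not in query:
            return None                              # malformed query
        body = query[query.index("{") + 1:query.rindex("}")]
        return {c.strip() for c in body.split(" . ") if c.strip()}
    p, g = clauses(pred), clauses(gold)
    return exact_match(pred, gold) or (p is not None and p == g)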
Detailed error analysis ::: Qualitative error analysis
Below we qualitatively analyze a number of instances the models fail on. We anonymize the MIDs in the same way as the data is provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k), and then randomly select queries from this list. In the following we discuss a few cases in more detail. Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:
ns:people.person.child = ns:people.person.children|
ns:fictional_universe.fictional_character.children|
ns:organization.organization.child/
ns:organization.organization_relationship.child
ns:people.person.sibling = ns:people.person.siblings/
ns:people.sibling_relationship.sibling|
ns:fictional_universe.fictional_character.siblings/
ns:fictional_universe.sibling_relationship_of_fictional_characters.siblings
Query 1: “What sibling of M0 was M1's parent?”
Golden (correct) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.child M1 .
?x0 ns:people.person.sibling M0 .
FILTER ( ?x0 != M0 )
}
Inferred (system) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.sibling ?x1 .
?x0 ns:people.person.sibling M0 .
?x1 ns:people.person.child M1 .
FILTER ( ?x0 != ?x1 )
}
Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set in order to show why we believe it provides enough information for the question to be answered correctly.
Some subqueries of the query and their occurrences are shown in Table TABREF140. While the exact subquery “What sibling” does not occur during training, the two words have been shown separately in many instances: the subqueries “sibling of Mx” and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of its rule tree with those shown during training. As can be read from the table, similar sentences have been shown during training. Some examples are:
What was executive produced by and written by a sibling of M0?
What costume designer did M1's parent employ?
What cinematographer was a film editor that M2 and M3 married?
What film director was a character influenced by M2?
Query 2: “Did a male film director edit and direct M0 and M1?”
Golden (correct) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:film.editor.film M1 .
?x0 ns:people.person.gender m_05zppz
}
Inferred (system) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:people.person.gender m_05zppz
}
Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the clause stating that the film director also edited M1.
Looking at subqueries and their occurrence count (Table TABREF145), we see again that various subqueries occur often during training. However, “edit” and “direct” have rarely been shown together. When looking at the rule trees, we see that both conjunctions in the query separately occur often during training: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur during training. This may be the reason why all systems fail on this example, but at the same time we believe a compositional learner should be able to generalize correctly given the training instances. Some examples are:
Did a male film director that M3's parent married influence an art director?
Did a film producer that played M2 edit and direct M1?
Did a screenwriter edit and direct a sequel of M1?
Did a Chinese male film director edit M1 and M2?
Additional experimental results on scan
Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available (obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):
BIBREF2:
simple (random)
by action sequence length
adding a primitive and adding a primitive along with complex combinations
BIBREF6:
adding a template
adding template fillers
adding more training examples of fillers (fewshot)
In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.
We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that the compound divergence is correlated with accuracy for these baseline architectures. One difference from the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific potentially problematic scenario for learning. E.g. in the experiment `primitive<jump>' (with very low accuracies for all three systems) the jump command is shown in exactly one combination (namely alone) in the training data, while it occurs in all test examples in arbitrary combinations.
This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).
While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports).
Analysis of relations between accuracy, compound divergence, and training size
Figure FIGREF28 shows for all baseline systems a strong correlation between accuracy and compound divergence for the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation changes for different training sizes. Figures FIGREF159 and FIGREF159 show that this correlation also holds for smaller training sizes, although the accuracy is then generally somewhat lower.
At the same time, we observe that the difference between the accuracies of various training sizes gets smaller as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach a training size of about 80k for CFQ and about 6k for scan. This indicates that further increasing the train set size may not be sufficient to do well on these compositionality experiments.
Logical Form
To represent our logical form, we use the syntax of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.
Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:
and the following would be roles:
Note that our logical form does not have roles other than those of the form RolePair($C_1$, $C_2$).
New strings are generated by using a special function new_var($\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\$S$. This string can later be used as a variable in a sparql constraint.
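A minimal sketch of this behavior (the binding of the result to the grammar variable $\$S$ is handled by the rule engine and is not modeled here):

import itertools

_var_counter = itertools.count()

def new_var():
    """Generate a fresh SPARQL variable name of the form ?x<N>."""
    return f"?x{next(_var_counter)}"

# new_var() -> '?x0', new_var() -> '?x1', ...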
Rule Format
This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.
General formatting conventions shared across all rule types:
Variable names are prefixed by `$'. Example: $X.
(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)
Concept names are written in camel case. Example: FilmProducer.
Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.
Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.
String literals are enclosed in single quotes. Example: 'ns:film:director'.
Rule Format ::: Grammar rule format
The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.
Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:
Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.
Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.
Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).
Shared values in feature structures are represented through the use of variables.
Specifically, in the rules index, CFQ grammar rules are described in the format
$T_{0}(F_{0})[H]/L_{0} \rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$
where:
Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).
Each $L_{i}$ for $i \in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.
$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.
Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.
$H$ is either an empty string or one of the variables $L_{i}$ for $i \in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.
Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.
This means that, for example, the rule
ACTIVE_VP[_head]/_head
$\rightarrow $ VP_SIMPLE(form:infinitive)/_head
can be considered a notational shorthand for the following rule expressed purely using feature lists:
(cat:ACTIVE_VP, sem:_head)[_head]
$\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)
Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).
Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).
Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.
Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\$v_{1}, ..., a_{k}$:$\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.
Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\sqcap $. For example, under this criterion, the logical form GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male or with ($\exists $RolePair(Predicate, Gender).Male) $\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male $\sqcap $ $\exists $RolePair(Predicate, GenderHaver).FilmProducer.
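The following sketch shows attribute-value unification under this feature encoding: each attribute maps to the set of atomic values it allows, a missing attribute is unconstrained, explicit absence is the value '_none_', and a negation is pre-expanded into its complement set. Variable binding and the unification of logical forms are omitted; this is an illustration, not the CFQ implementation.

def unify_features(f1, f2):
    """Unify two feature lists given as dicts of attribute -> set of allowed values.
    Returns the unified dict, or None if unification fails."""
    unified = {}
    for attr in set(f1) | set(f2):
        if attr in f1 and attr in f2:
            allowed = f1[attr] & f2[attr]          # intersection of disjunctive values
            if not allowed:
                return None                        # incompatible feature values
        else:
            allowed = f1.get(attr, f2.get(attr))   # the constrained side wins
        unified[attr] = set(allowed)
    return unified

# (form:gerund|infinitive) unified with (form:infinitive) keeps {'infinitive'}:
# unify_features({"form": {"gerund", "infinitive"}}, {"form": {"infinitive"}})
# (subject:_none_, object:yes) unifies with (object:yes) but not with (subject:yes, object:yes).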
Rule Format ::: Knowledge rule format
CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:
$\rightarrow K$, where $K$ is knowledge that is output.
By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.
The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.
All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.
Supported knowledge predicates:
BoundRolePairs
ExclusiveRolePair
FreebaseEntityMapping
FreebasePropertyMapping
FreebaseTypeMapping
NonExclusiveRolePair
Role
Rule Format ::: Inference rule format
CFQ inference rules transform logical forms and may be conditioned on knowledge. In the rules index, they are described in the following format:
$K: L_0 \rightarrow L_1$
where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$.
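As an illustration of the precondition check (that $r(K)$ is contained in $KB^{CFQ}$), the sketch below assumes ground facts are represented as (predicate, argument-tuple) pairs; the matching of $L_0$ against the logical form that produces the variable replacement $r()$ itself is omitted, and the example facts are hypothetical.

def preconditions_hold(preconditions, substitution, kb):
    """Check that every precondition, after variable replacement, is a ground
    fact contained in the knowledge base (a set of (predicate, args) pairs)."""
    def substitute(fact):
        predicate, args = fact
        return (predicate, tuple(substitution.get(a, a) for a in args))
    return all(substitute(k) in kb for k in preconditions)

# kb = {("Role", ("Director", "DirectorOf"))}
# preconditions_hold([("Role", ("$X", "DirectorOf"))], {"$X": "Director"}, kb)  -> True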
Rule Format ::: Resolution rule format
CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.
In the rules index, they are described in the following format:
$K: S_0 \rightarrow S_1~...~S_n$
where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.
Generation Algorithm
Our generation algorithm produces triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.
The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\langle \text{question, logical form} \rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\langle \text{text, logical form} \rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.
When a $\langle \text{question, logical form} \rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield as consistent as possible a rule application DAG and $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple for any given question.
Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.
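A minimal version of this normalization step might look as follows; the clause representation and the exact renaming scheme are assumptions of the sketch rather than the precise CFQ post-processing.

```python
import re

def normalize_sparql_clauses(clauses):
    """Sketch of query normalization: sort the WHERE clauses alphabetically and
    renumber variables (?x0, ?x1, ...) in order of first appearance. The clause
    representation and naming scheme are illustrative assumptions."""
    renaming = {}

    def rename(match):
        var = match.group(0)
        renaming.setdefault(var, '?x%d' % len(renaming))
        return renaming[var]

    return [re.sub(r'\?\w+', rename, clause) for clause in sorted(set(clauses))]

print(normalize_sparql_clauses(['?b ns:film.director.film ?a',
                                '?b a ns:people.person',
                                '?a a ns:film.film']))
# -> ['?x0 a ns:film.film', '?x1 a ns:people.person', '?x1 ns:film.director.film ?x0']
```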
The resulting $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple is then appended to the CFQ dataset.
Generation Algorithm ::: Join by Logical Form
In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules means that the complete behavior of the generator should be observable on any reasonably-sized train set. The same applies to certain core behaviors of the description logic $\mathcal {EL}$, such as commutativity and associativity of $\sqcap $, which we likewise do not track as explicit rules because they are applied just as ubiquitously.
One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example.
Generation Algorithm ::: Relationship between Generation and Parsing
Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task. The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text.
Generation Algorithm ::: Selecting an appropriate sample set
For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:
maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)
while using a uniform distribution from simple examples to increasingly more complex examples.
We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.
For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.
The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity.
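A simplified sketch of this greedy subsampling step is given below; it operates on precomputed subgraph multisets per example and ignores the per-complexity-level processing and caps described above. The data structures are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(counts):
    """Empirical entropy (in bits) of a frequency distribution given as a Counter."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

def greedy_entropy_subsample(candidates, k):
    """Greedily pick k examples that maximize the entropy of the subgraph distribution.

    `candidates` maps an example id to the Counter of (limited-size) subgraphs
    occurring in its rule application DAG -- an illustrative representation.
    """
    pool, counts, selected = dict(candidates), Counter(), []
    for _ in range(min(k, len(pool))):
        best_id = max(pool, key=lambda ex_id: entropy(counts + pool[ex_id]))
        counts += pool.pop(best_id)
        selected.append(best_id)
    return selected

# Toy usage: the examples contributing the most diverse subgraphs are chosen first.
examples = {'a': Counter({'G1': 2, 'G2': 1}),
            'b': Counter({'G3': 1, 'G4': 1}),
            'c': Counter({'G1': 3})}
print(greedy_entropy_subsample(examples, 2))  # -> ['b', 'a']
```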
Example of a rule application DAG
Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form.
Example of a rule application DAG ::: DAG normalization
As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.
Specifically, together with adopting the measures described above to ensure that rules are applied in a deterministic order, we achieve the normalization of the DAG by only producing edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\prime }$ with $B \rightarrow B^{\prime }$ (i.e., $B^{\prime }$ depends on $B$), we do not produce the edge $B^{\prime } \rightarrow A$ but only the minimal edge $B \rightarrow A$.
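In code, this pruning can be sketched as follows (the input data structures are our own illustrative assumptions): an edge $B^{\prime } \rightarrow A$ is dropped whenever some other candidate dependency of $A$ is already an ancestor of $B^{\prime }$.

```python
def minimal_dependency_edges(candidates, ancestors):
    """Sketch of DAG normalization via minimal dependencies.

    `candidates[a]` is the set of rule applications after which `a` could be
    applied, and `ancestors[b]` is the set of strict ancestors of `b` in the
    DAG built so far (both illustrative representations). The edge b' -> a is
    dropped whenever some other candidate of `a` is an ancestor of b'.
    """
    edges = set()
    for a, parents in candidates.items():
        for b_prime in parents:
            if not (ancestors.get(b_prime, set()) & parents):
                edges.add((b_prime, a))
    return edges

# A could be applied after B or after B', but B' itself depends on B,
# so only the minimal edge B -> A is kept.
candidates = {'A': {'B', "B'"}}
ancestors = {'B': set(), "B'": {'B'}}
print(minimal_dependency_edges(candidates, ancestors))  # -> {('B', 'A')}
```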
Example of a rule application DAG ::: Concept abbreviations
For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:
Director = FilmDirector
Directee = DirectedFilm
Directing = DirectingAFilm
SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent))
ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
E1 = Entity('?E1')
Example of a rule application DAG ::: Entity placeholders
As described in Section SECREF16, during generation we initially generate a $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.
Versions of entity rules applicable when using entity placeholders:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
Versions of entity rules applicable when using actual entity MIDs:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/'m.'$X
$\rightarrow $ 'm.'$X
ENTITY_MID:
ent2sparql('m.'$X) $\rightarrow $ 'ns:m.'$X
Example of a rule application DAG ::: Subgraphs and their weights
Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.
As described in Section SECREF6, given a large subset $\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \in \mathbb {G}$ that occurs in that sample as:
$w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$
where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set.
Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.
E.g. in the extreme case that $G$ only occurs within the context $G^{\prime }$, the weight of $G$ will be 0 in all samples. Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explainable by the frequency of occurrence of one of its super-compounds.
Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set.
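In code, this weight computation can be sketched as follows; representing the occurrences and the empirical probabilities $P(G^{\prime } | G)$ as precomputed inputs is an illustrative assumption.

```python
def subgraph_weight(occurrence_supergraphs, p_super_given_g):
    """Sketch of w(G) for one sample.

    `occurrence_supergraphs` lists, for every occurrence g of G in the sample,
    the set of subgraph types G' occurring as strict supergraphs of g;
    `p_super_given_g[G']` is the empirical probability P(G' | G) over the full
    sample set. Both are illustrative representations.
    """
    occurrence_weights = []
    for supergraphs in occurrence_supergraphs:
        max_p = max((p_super_given_g[gp] for gp in supergraphs), default=0.0)
        occurrence_weights.append(1.0 - max_p)
    return max(occurrence_weights, default=0.0)

# The yellow subgraph of Figure FIGREF203: a single occurrence whose most
# frequent supergraph (e.g. the blue subgraph) subsumes it in 60% of its
# occurrences across the sample set, giving a weight of 1 - 0.6 = 0.4.
print(subgraph_weight([{'blue', 'red'}], {'blue': 0.6, 'red': 0.3}))  # -> 0.4
```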
Rules Index
Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17.
Rules Index ::: Grammar rules
S=WHQ_F6E9egkQqxj:
S/_x
$\rightarrow $ WHQ/_x
WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:
WHQ[_subject]/DropDependency(_subject) $\sqcap $ DropDependency($\exists $RolePair(Subject, SubjectHaver)._action)
$\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject
VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action
NPQ_INDIRECT=WHO_5ptbPXXbuLZ:
NPQ_INDIRECT(number:singular)/Person
$\rightarrow $ 'who'
VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:
VP_INDIRECT(object:yes)[_action]/_action $\sqcap $ $\exists $RolePair(ObjectHaver, Object)._object
$\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action
DP/_object
VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:
VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head
ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:
ACTIVE_VP(number:singular)[_head]/_head
$\rightarrow $ VP_SIMPLE(form:past)/_head
VP_SIMPLE=VP_GHWf3fcVRZg:
VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head
$\rightarrow $ VP(concept_id:DirectingAFilm)/_head
VP=DIRECTED_JkYzNbQyXtv:
VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm
$\rightarrow $ 'directed'
DP=ENTITY_M6fSP5GvRaN:
DP(is_proper_noun:yes, number:singular)[_head]/_head
$\rightarrow $ ENTITY/_head
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
... (211 grammar rules total)
Rules Index ::: Inference rules
BOUND_ROLES_WITH_PREDICATE_OBJECT:
BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):
$\exists $RolePair($Q, $R).($A $\sqcap $ $B) $\rightarrow $ $\exists $RolePair($S, $T).($A $\sqcap $ $B)
BOUND_ROLES_WITH_PREDICATE_SUBJECT:
BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):
$B $\sqcap $ $\exists $RolePair($Q, $R).$A $\rightarrow $ $B $\sqcap $ $\exists $RolePair($S, $T).$A
IGNORE_BOUND_ROLE_PAIRS:
$A $\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\rightarrow $ $A
IGNORE_DEPENDENCY_DROPPING:
DropDependency($X) $\rightarrow $ $X
PREDICATE_UNREIFICATION:
Role($Q, $P), Role($R, $P):
$\exists $RolePair($Q, Predicate).($P $\sqcap $ $\exists $RolePair(Predicate, $R).$A) $\rightarrow $ $\exists $RolePair($Q, $R).$A
... (17 inference rules total)
Rules Index ::: Resolution rules
CONJUNCTION_WITHOUT_ENTITY:
def2sparql($X $\sqcap $ $Y, $V1) $\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
GET_SPECIALIZATIONS:
get_specializations($X) $\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'
GET_VAR_CONJUNCTION:
get_var($X $\sqcap $ $Y, $V1) $\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))
GET_VAR_RELATION:
get_var($\exists $$R.$X, $V1) $\rightarrow $ $V1
GET_VAR_TYPE:
FreebaseTypeMapping($X, $F):
get_var($X, $V1) $\rightarrow $ $V1
PROPERTY_MAPPING:
FreebasePropertyMapping($R, $F):
role2sparql($R) $\rightarrow $ $F
RELATION_MAPPING_WITHOUT_EXCLUSION:
NonExclusiveRolePair($R):
rel2sparql($X, $R, $Y) $\rightarrow $ $X role2sparql($R) $Y
RELATION_TO_ENTITY:
def2sparql($\exists $$R.$X, $V1) $\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))
SHARED_VAR:
shared_var($X, $X) $\rightarrow $ $X
SPECIALIZATION_OF_TYPE:
def2sparql($X, $V1) $\rightarrow $ $V1 ' a ' type2sparql($X)
TYPE_MAPPING:
FreebaseTypeMapping($X, $F):
type2sparql($X) $\rightarrow $ $F
... (21 resolution rules total)
Rules Index ::: Knowledge rules
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))
$\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')
$\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')
$\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)
$\rightarrow $ Role(DirectedFilm, DirectingFilm)
$\rightarrow $ Role(FilmDirector, DirectingFilm)
... (194 knowledge rules total) | CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets |
0427ca83d6bf4ec113bc6fec484b2578714ae8ec | 0427ca83d6bf4ec113bc6fec484b2578714ae8ec_0 | Q: What three machine architectures are analyzed?
Text: Introduction
Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:
We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).
We present the Compositional Freebase Questions (CFQ), a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).
We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).
We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).
Distribution-Based Compositionality Assessment (DBCA)
Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.
As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: "Who directed [entity]?", "Did Christopher Nolan [predicate] Inception?", etc.
To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and "Who produced Inception?" because the atoms are identically represented in the train and test sets while the compounds differ.
To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)
Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality
We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:
Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.
Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.
The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).
To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \in \mathbb {G}$ in a sample as $w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$, where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.
We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\alpha (P \Vert Q) = \sum _{k} p_k^\alpha \, q_k^{1-\alpha } \in [0, 1]$ BIBREF9. For the atom divergence, we use $\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire to make the atom distributions in train and test as similar as possible. For the compound divergence, we use $\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$ as follows: $\mathcal {D}_C(V \Vert W) = 1 - C_{0.1}(\mathcal {F}_C(V) \Vert \mathcal {F}_C(W))$ and $\mathcal {D}_A(V \Vert W) = 1 - C_{0.5}(\mathcal {F}_A(V) \Vert \mathcal {F}_A(W))$.
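A direct transcription of these definitions into Python might look as follows; representing the (weighted) frequency distributions as Counters is an assumption of the sketch.

```python
from collections import Counter

def chernoff_coefficient(p, q, alpha):
    """C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha) for normalized
    frequency distributions given as Counters."""
    p_total, q_total = sum(p.values()), sum(q.values())
    return sum((p[k] / p_total) ** alpha * (q[k] / q_total) ** (1 - alpha)
               for k in p.keys() & q.keys())

def atom_divergence(train_atoms, test_atoms):
    return 1.0 - chernoff_coefficient(train_atoms, test_atoms, alpha=0.5)

def compound_divergence(train_compounds, test_compounds):
    return 1.0 - chernoff_coefficient(train_compounds, test_compounds, alpha=0.1)

# Toy usage: compounds that occur only in train or only in test raise D_C.
train_compounds = Counter({'who [directed] [entity]': 5, 'who [produced] [entity]': 5})
test_compounds = Counter({'who [directed] [entity]': 5, 'who [edited] [entity]': 5})
print(compound_divergence(train_compounds, test_compounds))  # ≈ 0.5
```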
Based on these principles, we suggest using as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\mathcal {D}_A \le 0.02$). See Section SECREF4 for details about how to construct such splits.
The CFQ Dataset
We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.
The CFQ Dataset ::: Automatic, rule-based generation
BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.
Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).
In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):
Grammar rules that generate natural language constructs and corresponding logical forms.
Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.
Resolution rules that map constructs of the logical form to sparql constructs.
Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.
These rules define a language of triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.
The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18.
The CFQ Dataset ::: Dataset details and statistics
Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).
Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.
Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).
Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.
Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.
Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.
Release and statistics.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
Compositionality Experiments for CFQ and scan
The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\mathcal {D}_C(V \Vert W)$ and $\mathcal {D}_A(V \Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.
In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.
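The following sketch outlines this greedy construction. It omits the removal steps and uses a generic `split_score` callback standing in for the comparison of $\mathcal {D}_A$ and $\mathcal {D}_C$ against their target values; the data structures and names are our own assumptions.

```python
import random
from collections import Counter

def greedy_divergence_split(examples, split_score, train_fraction=0.8, seed=0):
    """Greedily build train (V) and test (W) sets targeting given divergences.

    `examples` maps example ids to pairs (atom Counter, compound Counter), and
    `split_score(v_atoms, v_compounds, w_atoms, w_compounds)` returns how far a
    candidate split is from the target atom/compound divergences (lower is
    better; it must handle the initially empty sets). A simplified sketch: the
    occasional removal steps mentioned above are omitted.
    """
    rng = random.Random(seed)
    remaining = list(examples)
    rng.shuffle(remaining)  # random choices along the way; ties broken randomly
    v_ids, w_ids = [], []
    v_atoms, v_compounds = Counter(), Counter()
    w_atoms, w_compounds = Counter(), Counter()
    while remaining:
        # Decide whether to extend V or W so the train/test ratio is maintained.
        extend_train = (len(v_ids) * (1 - train_fraction)
                        <= len(w_ids) * train_fraction)

        def candidate_score(ex_id):
            atoms, compounds = examples[ex_id]
            if extend_train:
                return split_score(v_atoms + atoms, v_compounds + compounds,
                                   w_atoms, w_compounds)
            return split_score(v_atoms, v_compounds,
                               w_atoms + atoms, w_compounds + compounds)

        best_id = min(remaining, key=candidate_score)
        remaining.remove(best_id)
        atoms, compounds = examples[best_id]
        if extend_train:
            v_ids.append(best_id)
            v_atoms += atoms
            v_compounds += compounds
        else:
            w_ids.append(best_id)
            w_atoms += atoms
            w_compounds += compounds
    return v_ids, w_ids
```

For an MCD split, for instance, the score would reward high $\mathcal {D}_C(V \Vert W)$ while penalizing any $\mathcal {D}_A(V \Vert W)$ above 0.02.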
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\le N$, while the test set consists of examples with output length $> N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while the test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For scan, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.
The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.
Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.
This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar.
Experimental Results and Analysis ::: Experiment Setup
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention, i.e., an LSTM BIBREF19 with an attention mechanism BIBREF20; (2) Transformer BIBREF21; and (3) Universal Transformer BIBREF22.
We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.
Note that while we construct test and validation sets from the same distribution, we suggest that hyperparameter tuning should be done on a random split (or a random subset of the train set) if one wants to measure compositional generalization of a model with respect to an unknown test distribution, as opposed to an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence and would thus measure the ability of a particular architecture to yield models that can be made to generalize in one particular way (by leaking information about the test set through the hyperparameters).
Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the SPARQL output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\mapsto $ `select count(*) where {M0 a ns:film.writer}'.
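A minimal sketch of this anonymization step is shown below; the name-to-MID mapping, the example MID, and the string-replacement approach are illustrative assumptions rather than the actual preprocessing code.

```python
def anonymize(question, query, name_to_mid):
    """Replace each entity name in the question and its MID in the sparql query
    with the same placeholder M0, M1, ... (in order of the given mapping).
    A sketch; `name_to_mid` and the MID below are hypothetical."""
    for i, (name, mid) in enumerate(name_to_mid.items()):
        placeholder = 'M%d' % i
        question = question.replace(name, placeholder)
        query = query.replace('ns:' + mid, placeholder)
    return question, query

print(anonymize('Was Christopher Nolan a screenwriter',
                'SELECT count(*) WHERE { ns:m.0example a ns:film.writer }',
                {'Christopher Nolan': 'm.0example'}))
# -> ('Was M0 a screenwriter', 'SELECT count(*) WHERE { M0 a ns:film.writer }')
```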
The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18.
Experimental Results and Analysis ::: Results and analysis for CFQ
The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:
All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).
The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.
For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.
This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.
Note that the experiment based on output length exhibits a worse accuracy than we would expect based on its compound divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and the accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the length ratios.
Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed, with more details provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors for longer sequences and predict about 20% too short output when they make an error. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3' s spouse?” the best system ignores the adjective “female”.
Experimental Results and Analysis ::: Results and analysis for scan
To demonstrate the use of our analysis method on another dataset, we re-create the scan dataset BIBREF2, which consists of compositional navigation commands (e.g., `turn left twice and jump') mapped to corresponding action sequences (e.g., `lturn lturn jump'). We use the original grammar while tracking the rule applications used for the construction of each input-output pair. This enables us to compare the compositional generalization abilities of the baseline systems on this dataset in a novel way.
Figure FIGREF28(b) shows the graphs for the scan data set in the same setup as Figure FIGREF28(a) does for CFQ. We observe that the compound divergence again is a good predictor for the mean accuracy for all three architectures. One difference is that for scan the systems are able to attain accuracies close to 100% for compound divergences up to around 0.2, which is not the case for CFQ. This seems to be in line with the fact that overall CFQ is a more complex task than scan: the total number of rules used in generating scan is only 38 in comparison to 443 rules in the construction of CFQ.
Appendix SECREF14 provides a comparison to other experiments presented in previous work, including experiments that have significantly different atom distributions. We observe that this generally causes lower accuracies but does not break the correlation between accuracy and compound divergence.
Related Work
To measure compositional generalization for semantic parsing to SQL, BIBREF8 propose to ensure that no SQL query pattern occurs in both the train and the test set (“query split”), and they provide such splits for several data sets. By evaluating several ML architectures the authors confirm that this query-pattern split is harder to learn than a conventional split.
BIBREF2 introduce the scan dataset, and several publications provide interesting analyses of compositional generalization using it BIBREF5, BIBREF6. BIBREF7 discuss a particular extension of a seq2seq model that is effective in handling difficult scan sub-tasks by separating semantic and syntactic information during learning. Our contributions extend the analyses on the scan data in several ways: CFQ provides richer annotations and covers a broader subset of English than the scan dataset, and we propose a comprehensive score for assessing aggregate compositionality of a system on a given task.
The mathematics dataset BIBREF13 is a large, automatically generated set of 112M samples in 56 separate sub-tasks. The authors present data and experiments that share common goals with our approach, but focus on mathematical reasoning instead of natural language. Our breakdown of generation rules per train sample is more fine-grained, which allows a more precise compositional generalization analysis. Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which, however, do not focus on compositional generalization.
A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.
BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.
While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.
A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.
With regards to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. The author suggests to use a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived on the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner that “can generalize to more complex problems than the learner has previously encountered”.
Conclusion and Outlook
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would also like to extend our approach to broader subsets of language understanding, including the use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains.
Example Dataset Item
The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...').
Data Quality Analysis
During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples from the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided just for ease of human reading. Manual checking also indicated that all questions are associated with semantically correct sparql queries. However, because we rely on the data present in Freebase, there are three debatable questions which sound somewhat unnatural (UNKREF33, UNKREF51, and UNKREF59; see further discussion below the list).
Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?
Which male person was a sibling of [Andrew Klavan]?
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?
Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?
What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?
Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?
Was [Cachún cachún ra ra!]'s director a film director's American child?
Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?
Was [Holly Ellenson]'s child [Wally Ellenson]?
Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?
Was a film producer [Lilies of the Ghetto]'s distributor and producer?
Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?
What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?
Who influenced and was influenced by [Black Dynamite]'s star?
What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?
Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?
Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?
What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?
Who was a male costume designer and director of [Ene... due... like... fake...] and [The Windmill Bar]?
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?
Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?
What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?
What male employee of [Weta Workshop] was [Bad Taste]'s editor?
Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?
What art director influenced [DreamWorks Animation]'s founder?
Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?
Was [Eric Massa] a cinematographer's parent's sibling's American sibling?
What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?
What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?
What did an editor and cinematographer of [Tongue Twister Variations] direct?
Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?
Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?
Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?
Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?
Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?
Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?
Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?
What Italian film director directed [Children of Hannibal]?
What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?
Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?
Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?
What did a sibling of [Louise Rohr] executive produce, produce, and edit?
Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?
Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?
Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?
Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?
Further discussion of the debatable questions:
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character.
Data Distribution Analysis ::: Answer frequencies
Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency.
Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels
Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even.
Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations
Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations.
Divergence-Based Split Analysis ::: Qualitative analysis of MCD$_{1}$
Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easily observable “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9, we provide a random sample of size 20 from both the train and test set here. Indeed, even for the MCD$_{1}$ split with a high divergence of 0.694, the 20 random samples of train and test questions shown below cannot easily be distinguished, as they both contain the same kind of questions of different sizes.
Train samples from MCD$_{1}$:
What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?
Which male person influenced and was influenced by [William Dean Howells]?
Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?
What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?
Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?
What did a spouse of [Shorts]'s producer's spouse executive produce and direct?
Did [P. G. Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?
What Mexican person did a film producer employ?
Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?
Who did [For the Secret Service]'s director's female spouse influence?
Who married, was influenced by, and influenced a company's founder?
Was [MAN SE]'s French male German employee's employer [Sulzer]?
Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?
Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?
Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?
Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?
Who was influenced by [Adam Sandler] and married a film producer?
Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?
Was a founder of [IG Port] employed by a film producer?
Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?
Test samples from MCD$_{1}$:
What sequel of [Paranormal Activity 2] was edited by and written by a film director?
What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?
Did [Pixar] employ an art director and employ [Susham Bedi]?
Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?
What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?
Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?
Was [Luke Larkin Music] an art director's employer?
What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?
Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?
What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?
Was a person's female sibling [Reggae in a Babylon]'s producer?
Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?
Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?
Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?
Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?
Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?
Was [The Voyeur]'s executive producer an actor's parent?
Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?
Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?
Was a costume designer's employer [Philips High School]?
Divergence-Based Split Analysis ::: Quantitative analysis of MCD$_{1}$
Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom (resp. compound), we use the fraction of examples it appears in. Both atoms and compounds are indexed primarily by their frequency in the train set, secondarily by their frequency in the test set, in decreasing order. For practical reasons we only look at a small subset of compounds here, but we believe the analysis is representative.
We can see that the frequency of atoms in the two sets is closely aligned and that all atoms from the test set appear in the train set. The frequency of compounds, however, is wildly different: while some invariably occur in both sets, the frequencies are often not aligned and most compounds appear only in either the train or the test set.
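As an illustration of how this statistic is obtained, the following Python sketch computes, for each atom, the fraction of examples it appears in; it assumes each example is represented simply by the collection of atoms (rules) used to generate it, and is not the actual pipeline code.

from collections import Counter

def atom_frequencies(examples):
    # `examples`: list where each element is the collection of atoms (rules)
    # used by one example. Returns the fraction of examples each atom appears in.
    counts = Counter()
    for atoms in examples:
        counts.update(set(atoms))
    return {atom: count / len(examples) for atom, count in counts.items()}

Applying the same function to the train and test sets separately (and analogously to weighted compound occurrences) yields the frequency curves compared in Figure FIGREF133.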
Hyperparameters
The experiments were run using the tensor2tensor framework BIBREF39 with some of the hyperparameters tuned using a random split of a previous, smaller version of the data set during development. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134.
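As a rough illustration of the override mechanism in tensor2tensor, the following Python sketch starts from the registered 'transformer_base' defaults and overrides a few values; the specific numbers below are placeholders for illustration, not the tuned values reported in Table TABREF134.

from tensor2tensor.models import transformer

def example_hparams():
    # Start from the default hyperparameter set and override selected values
    # (placeholder values only; see Table TABREF134 for the values actually used).
    hparams = transformer.transformer_base()
    hparams.num_hidden_layers = 2
    hparams.hidden_size = 256
    hparams.num_heads = 4
    hparams.learning_rate = 0.1
    return hparams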
Detailed error analysis ::: Breakdown of error types
Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).
The accuracy metric requires the model response and the golden (correct) answer to be exactly equal to each other. Thus, a sparql query with the same clauses as the golden answer but in a different order, or with some of the clauses appearing multiple times, is also considered to be an error despite being equivalent to the golden answer in its meaning. The number of such errors is relatively small, though, accounting for 1.8%, 0.6% and 1.5% of the total test set size for LSTM+Attention, Transformer and Universal Transformer respectively.
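The distinction between genuine errors and order-only mismatches can be made concrete with a short Python sketch; it assumes a simplified serialization in which the query body sits between '{' and '}' and clauses are separated by ' . ', which approximates but is not guaranteed to match the exact dataset format.

def clause_set(query):
    # Extract the set of clauses of a sparql query, ignoring order and duplicates.
    body = query.split('{', 1)[1].rsplit('}', 1)[0]
    return set(clause.strip() for clause in body.split(' . ') if clause.strip())

def equivalent_but_not_exact(prediction, target):
    # Counted as an error by the exact-match metric, yet equivalent in meaning:
    # same clauses, differing only in order or in repeated clauses.
    return prediction != target and clause_set(prediction) == clause_set(target)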
Detailed error analysis ::: Qualitative error analysis
Below we qualitatively analyze a number of instances the models fail on. We anonymize the MIDs in the same way as the data is provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k), and then randomly select queries from this list. In the following we discuss a few cases in more detail. Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:
ns:people.person.child = ns:people.person.children|
ns:fictional_universe.fictional_character.children|
ns:organization.organization.child/
ns:organization.organization_relationship.child
ns:people.person.sibling = ns:people.person.siblings/
ns:people.sibling_relationship.sibling|
ns:fictional_universe.fictional_character.siblings/
ns:fictional_universe.
sibling_relationship_of_fictional_characters.siblings
Query 1: “What sibling of M0 was M1' s parent?”
Golden (correct) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.child M1 .
?x0 ns:people.person.sibling M0 .
FILTER ( ?x0 != M0 )
}
Inferred (system) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.sibling ?x1 .
?x0 ns:people.person.sibling M0 .
?x1 ns:people.person.child M1 .
FILTER ( ?x0 != ?x1 )
}
Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set in order to show that, in our view, it provides enough information for the question to be answered correctly.
Some subqueries of the query and their occurrences are shown in Table TABREF140. While the exact subquery “What sibling” does not occur at training, the two words have been shown separately in many instances: the subqueries “sibling of Mx”, and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of the rule tree of this example with those shown at training. As can be read from the table, similar sentences have been shown during training. Some examples are:
What was executive produced by and written by a sibling of M0?
What costume designer did M1's parent employ?
What cinematographer was a film editor that M2 and M3 married?
What film director was a character influenced by M2?
Query 2: “Did a male film director edit and direct M0 and M1?”
Golden (correct) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:film.editor.film M1 .
?x0 ns:people.person.gender m_05zppz
}
Inferred (system) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:people.person.gender m_05zppz
}
Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the relation between the director and movie M1.
Looking at subqueries and their occurrence count (Table TABREF145), we see again that various subqueries occur often during training. However, “edit and direct” have not been shown often together. When looking at the rule trees, we see that both conjunctions in the query occur often at training separately: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur at training. This may be the reason why all systems fail on this example, but at the same time we believe a compositional learner should be able to generalize correctly given the training instances. Some examples are:
Did a male film director that M3's parent married influence an art director?
Did a film producer that played M2 edit and direct M1?
Did a screenwriter edit and direct a sequel of M1?
Did a Chinese male film director edit M1 and M2?
Additional experimental results on scan
Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available. (Data splits obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):
BIBREF2:
simple (random)
by action sequence length
adding a primitive and adding a primitive along with complex combinations
BIBREF6:
adding a template
adding template fillers
adding more training examples of fillers (fewshot)
In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.
We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that the compound divergence is correlated with accuracy for these baseline architectures. One difference to the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific potentially problematic scenario for learning. E.g. in the experiment `primitive<jump>' (with very low accuracies for all three systems) the jump command is shown exactly in one combination (namely alone) in the training data while it occurs in all test examples in arbitrary combinations.
This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).
While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports).
Analysis of relations between accuracy, compound divergence, and training size
Figure FIGREF28 shows for all baseline systems a strong correlation between accuracy and compound divergence for the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation changes for different training sizes. Figures FIGREF159 and FIGREF159 show that the correlation also holds for smaller training sizes, but that the accuracy is generally somewhat lower.
At the same time, we observe that the difference between the accuracies of various training sizes gets smaller as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach a training size of about 80k for CFQ and about 6k for SCAN. This indicates that further increasing the train set size may not be sufficient to do well on these compositionality experiments.
Logical Form
To represent our logical form, we use the syntax of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.
Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:
and the following would be roles:
Note that our logical form does not have roles other than those of the form RolePair($C_1$, $C_2$).
New strings are generated by using a special function new_var($\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\$S$. This string can later be used as a variable in a sparql constraint.
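A minimal Python sketch of this behavior is shown below; the module-level counter and the `bindings` dictionary are illustrative assumptions rather than the generator's actual implementation.

import itertools

_fresh = itertools.count()

def new_var(bindings, name):
    # Generate a unique string of the form ?x<N> and assign it to the given
    # variable name so that it can later be used in a sparql constraint.
    var = '?x%d' % next(_fresh)
    bindings[name] = var
    return var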
Rule Format
This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.
General formatting conventions shared across all rule types:
Variable names are prefixed by `$'. Example: $X.
(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)
Concept names are written in camel case. Example: FilmProducer.
Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.
Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.
String literals are enclosed in single quotes. Example: 'ns:film.director'.
Rule Format ::: Grammar rule format
The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.
Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:
Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.
Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.
Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).
Shared values in feature structures are represented through the use of variables.
Specifically, in the rules index, CFQ grammar rules are described in the format
$T_{0}(F_{0})[H]/L_{0} \rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$
where:
Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).
Each $L_{i}$ for $i \in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.
$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.
Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.
$H$ is either an empty string or one of the variables $L_{i}$ for $i \in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.
Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.
This means that, for example, the rule
ACTIVE_VP[_head]/_head
$\rightarrow $ VP_SIMPLE(form:infinitive)/_head
can be considered a notational shorthand for the following rule expressed purely using feature lists:
(cat:ACTIVE_VP, sem:_head)[_head]
$\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)
Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).
Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).
Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.
Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\$v_{1}, ..., a_{k}$:$\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.
Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only if they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\sqcap $. For example, under this criterion, the logical form GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male or with ($\exists $RolePair(Predicate, Gender).Male) $\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male $\sqcap $ $\exists $RolePair(Predicate, GenderHaver).FilmProducer.
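The requirement of structural equality modulo commutativity and associativity of $\sqcap $ can be illustrated with the following Python sketch; the tuple encoding ('and', left, right) for conjunctions is an assumption made for illustration, and the sketch only flattens top-level conjunctions and ignores the variable-replacement step.

def flatten_conjunction(form):
    # Flatten nested conjunctions ('and', left, right) into a canonically sorted
    # list of conjuncts, so that A ⊓ B and B ⊓ A compare equal.
    if isinstance(form, tuple) and form and form[0] == 'and':
        return sorted(flatten_conjunction(form[1]) + flatten_conjunction(form[2]), key=repr)
    return [form]

def equal_modulo_conjunction(a, b):
    return flatten_conjunction(a) == flatten_conjunction(b)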
Rule Format ::: Knowledge rule format
CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:
$\rightarrow K$, where $K$ is knowledge that is output.
By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.
The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.
All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.
Supported knowledge predicates:
BoundRolePairs
ExclusiveRolePair
FreebaseEntityMapping
FreebasePropertyMapping
FreebaseTypeMapping
NonExclusiveRolePair
Role
Rule Format ::: Inference rule format
CFQ inference rules transform logical forms and may be conditioned on knowledge. In the rules index, they are described in the following format:
$K: L_0 \rightarrow L_1$
where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$.
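The semantics of applying such a rule can be sketched in Python as follows; the encoding of logical forms and knowledge as nested tuples, with variables written as strings starting with '$', is an assumption for illustration, and the sketch ignores rewriting inside subterms as well as the commutativity of $\sqcap $.

def match(pattern, term, subst):
    # Try to extend the substitution `subst` so that pattern equals term.
    if isinstance(pattern, str) and pattern.startswith('$'):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(pattern, subst):
    if isinstance(pattern, str):
        return subst.get(pattern, pattern)
    if isinstance(pattern, tuple):
        return tuple(substitute(p, subst) for p in pattern)
    return pattern

def apply_inference_rule(preconditions, lhs, rhs, term, kb):
    # Apply the rule "K: L0 -> L1": find a substitution r with r(L0) == term and
    # r(K) contained in the knowledge base `kb`, then return r(L1).
    subst = match(lhs, term, {})
    if subst is None:
        return None
    if all(substitute(k, subst) in kb for k in preconditions):
        return substitute(rhs, subst)
    return None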
Rule Format ::: Resolution rule format
CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.
In the rules index, they are described in the following format:
$K: S_0 \rightarrow S_1~...~S_n$
where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.
Generation Algorithm
Our generation algorithm produces triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.
The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\langle \text{question, logical form} \rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\langle \text{text, logical form} \rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.
When a $\langle \text{question, logical form} \rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield as consistent as possible a rule application DAG and $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple for any given question.
Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.
The resulting $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple is then appended to the CFQ dataset.
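The normalization step described above can be sketched in Python as follows; the assumption that clauses are separated by ' . ' inside a single pair of braces, as well as the exact ordering of the sorting and renumbering steps, are simplifications for illustration.

import re

def normalize_sparql(query):
    # Sort the query clauses alphabetically, then renumber variables to
    # ?x0, ?x1, ... in order of first appearance.
    head, rest = query.split('{', 1)
    body, _ = rest.rsplit('}', 1)
    clauses = sorted(clause.strip() for clause in body.split(' . ') if clause.strip())
    canonical = head + '{ ' + ' . '.join(clauses) + ' }'
    mapping = {}
    def renumber(match):
        return mapping.setdefault(match.group(0), '?x%d' % len(mapping))
    return re.sub(r'\?x\d+', renumber, canonical)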
Generation Algorithm ::: Join by Logical Form
In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules means that the complete behavior of the generator should be observable on any reasonably-sized train set. The same applies to certain core behaviors of the description logic $\mathcal {EL}$, such as commutativity and associativity of $\sqcap $, which we omit tracking as explicit rules due to their similar ubiquity of application.
One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example.
Generation Algorithm ::: Relationship between Generation and Parsing
Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task. The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text.
Generation Algorithm ::: Selecting an appropriate sample set
For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:
maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)
while using a uniform distribution from simple examples to increasingly more complex examples.
We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.
For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.
The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity.
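A greedy selection step of this kind can be sketched in Python as follows; this is an illustrative simplification that ignores the per-complexity-level caps and the restriction to subgraphs with a limited number of nodes.

import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log(c / total, 2) for c in counts.values() if c)

def greedy_subsample(candidates, target_size):
    # `candidates` maps an example id to the subgraph keys it contains.
    # Repeatedly add the example that maximizes the entropy of the resulting
    # subgraph frequency distribution.
    selected, counts = [], Counter()
    remaining = dict(candidates)
    while remaining and len(selected) < target_size:
        best = max(remaining, key=lambda ex: entropy(counts + Counter(remaining[ex])))
        counts += Counter(remaining.pop(best))
        selected.append(best)
    return selected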
Example of a rule application DAG
Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form.
Example of a rule application DAG ::: DAG normalization
As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.
Specifically, together with adopting the measures described above to ensure that rules are applied in a deterministic order, we achieve the normalization of the DAG by only producing edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\prime }$ with $B \rightarrow B^{\prime }$ (i.e., $B^{\prime }$ depends on $B$), we don't produce the edge $B^{\prime } \rightarrow A$.
Example of a rule application DAG ::: Concept abbreviations
For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:
Director = FilmDirector
Directee = DirectedFilm
Directing = DirectingAFilm
SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent))
ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
E1 = Entity('?E1')
Example of a rule application DAG ::: Entity placeholders
As described in Section SECREF16, during generation we initially generate a $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.
Versions of entity rules applicable when using entity placeholders:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
Versions of entity rules applicable when using actual entity MIDs:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/'m.'$X
$\rightarrow $ 'm.'$X
ENTITY_MID:
ent2sparql('m.'$X) $\rightarrow $ 'ns:m.'$X
Example of a rule application DAG ::: Subgraphs and their weights
Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.
As described in Section SECREF6, given a large subset $\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \in \mathbb {G}$ that occurs in that sample as:
$w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$
where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set.
Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.
E.g. in the extreme case that $G$ only occurs within the context $G^{\prime }$, the weight of $G$ will be 0 in all samples. Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explainable by the frequency of occurrence of one of its super-compounds.
Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set.
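The per-sample weight computation can be sketched in Python as follows; the input format (one entry per occurrence of $G$ in the sample, listing the compounds that occur as strict supergraphs of that occurrence, together with a precomputed map of the empirical probabilities $P(G^{\prime }|G)$ over the full sample set) is an assumption for illustration.

def compound_weight(occurrence_supergraphs, supergraph_prob):
    # w(G) = max over occurrences g of G of (1 - max over supergraphs G' of g of P(G'|G)).
    weight = 0.0
    for supergraphs in occurrence_supergraphs:
        penalty = max((supergraph_prob.get(g, 0.0) for g in supergraphs), default=0.0)
        weight = max(weight, 1.0 - penalty)
    return weight

In the example just discussed, an occurrence of the yellow subgraph whose most frequent strict supergraph has $P(G^{\prime }|G) = 0.6$ contributes $1 - 0.6 = 0.4$, matching the weight given in the text.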
Rules Index
Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17.
Rules Index ::: Grammar rules
S=WHQ_F6E9egkQqxj:
S/_x
$\rightarrow $ WHQ/_x
WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:
WHQ[_subject]/DropDependency(_subject) $\sqcap $ DropDependency($\exists $RolePair(Subject, SubjectHaver)._action)
$\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject
VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action
NPQ_INDIRECT=WHO_5ptbPXXbuLZ:
NPQ_INDIRECT(number:singular)/Person
$\rightarrow $ 'who'
VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:
VP_INDIRECT(object:yes)[_action]/_action $\sqcap $ $\exists $RolePair(ObjectHaver, Object)._object
$\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action
DP/_object
VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:
VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head
ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:
ACTIVE_VP(number:singular)[_head]/_head
$\rightarrow $ VP_SIMPLE(form:past)/_head
VP_SIMPLE=VP_GHWf3fcVRZg:
VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head
$\rightarrow $ VP(concept_id:DirectingAFilm)/_head
VP=DIRECTED_JkYzNbQyXtv:
VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm
$\rightarrow $ 'directed'
DP=ENTITY_M6fSP5GvRaN:
DP(is_proper_noun:yes, number:singular)[_head]/_head
$\rightarrow $ ENTITY/_head
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
... (211 grammar rules total)
Rules Index ::: Inference rules
BOUND_ROLES_WITH_PREDICATE_OBJECT:
BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):
$\exists $RolePair($Q, $R).($A $\sqcap $ $B) $\rightarrow $ $\exists $RolePair($S, $T).($A $\sqcap $ $B)
BOUND_ROLES_WITH_PREDICATE_SUBJECT:
BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):
$B $\sqcap $ $\exists $RolePair($Q, $R).$A $\rightarrow $ $B $\sqcap $ $\exists $RolePair($S, $T).$A
IGNORE_BOUND_ROLE_PAIRS:
$A $\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\rightarrow $ $A
IGNORE_DEPENDENCY_DROPPING:
DropDependency($X) $\rightarrow $ $X
PREDICATE_UNREIFICATION:
Role($Q, $P), Role($R, $P):
$\exists $RolePair($Q, Predicate).($P $\sqcap $ $\exists $RolePair(Predicate, $R).$A) $\rightarrow $ $\exists $RolePair($Q, $R).$A
... (17 inference rules total)
Rules Index ::: Resolution rules
CONJUNCTION_WITHOUT_ENTITY:
def2sparql($X $\sqcap $ $Y, $V1) $\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
GET_SPECIALIZATIONS:
get_specializations($X) $\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'
GET_VAR_CONJUNCTION:
get_var($X $\sqcap $ $Y, $V1) $\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))
GET_VAR_RELATION:
get_var($\exists $$R.$X, $V1) $\rightarrow $ $V1
GET_VAR_TYPE:
FreebaseTypeMapping($X, $F):
get_var($X, $V1) $\rightarrow $ $V1
PROPERTY_MAPPING:
FreebasePropertyMapping($R, $F):
role2sparql($R) $\rightarrow $ $F
RELATION_MAPPING_WITHOUT_EXCLUSION:
NonExclusiveRolePair($R):
rel2sparql($X, $R, $Y) $\rightarrow $ $X role2sparql($R) $Y
RELATION_TO_ENTITY:
def2sparql($\exists $$R.$X, $V1) $\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))
SHARED_VAR:
shared_var($X, $X) $\rightarrow $ $X
SPECIALIZATION_OF_TYPE:
def2sparql($X, $V1) $\rightarrow $ $V1 ' a ' type2sparql($X)
TYPE_MAPPING:
FreebaseTypeMapping($X, $F):
type2sparql($X) $\rightarrow $ $F
... (21 resolution rules total)
Rules Index ::: Knowledge rules
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))
$\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')
$\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')
$\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)
$\rightarrow $ Role(DirectedFilm, DirectingFilm)
$\rightarrow $ Role(FilmDirector, DirectingFilm)
... (194 knowledge rules total) | LSTM+attention, Transformer , Universal Transformer |
f1c70baee0fd02b8ecb0af4b2daa5a56f3e9ccc3 | f1c70baee0fd02b8ecb0af4b2daa5a56f3e9ccc3_0 | Q: How big is new question answering dataset?
Text: Introduction
Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that underlies the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:
We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).
We present the Compositional Freebase Questions (CFQ), a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).
We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).
We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).
Distribution-Based Compositionality Assessment (DBCA)
Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.
As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: "Who directed [entity]?", "Did Christopher Nolan [predicate] Inception?", etc.
To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and "Who produced Inception?" because the atoms are identically represented in the train and test sets while the compounds differ.
To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)
Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality
We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:
Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.
Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.
The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).
To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \in \mathbb {G}$ in a sample as $w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$, where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.
We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\alpha (P \Vert Q) = \sum _{k} p_k^\alpha \, q_k^{1-\alpha } \in [0, 1]$ BIBREF9. For the atom divergence, we use $\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire to make the atom distributions in train and test as similar as possible. For the compound divergence, we use $\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define the atom divergence $\mathcal {D}_A$ and compound divergence $\mathcal {D}_C$ of a compositionality experiment consisting of a train set $V$ and a test set $W$ as $\mathcal {D}_A(V \Vert W) = 1 - C_{0.5}(\mathcal {F}_A(V) \Vert \mathcal {F}_A(W))$ and $\mathcal {D}_C(V \Vert W) = 1 - C_{0.1}(\mathcal {F}_C(V) \Vert \mathcal {F}_C(W))$.
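A minimal sketch of these divergence computations over distributions represented as plain dictionaries (the function names are ours):

def chernoff_coefficient(p, q, alpha):
    # C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha), for distributions given
    # as dicts mapping atoms or compounds to probabilities; missing keys count as 0.
    keys = set(p) | set(q)
    return sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1.0 - alpha) for k in keys)

def atom_divergence(f_a_train, f_a_test):
    # alpha = 0.5 (Bhattacharyya coefficient): symmetric, rewards similar atom distributions.
    return 1.0 - chernoff_coefficient(f_a_train, f_a_test, 0.5)

def compound_divergence(f_c_train, f_c_test):
    # alpha = 0.1: mainly sensitive to whether a compound occurs in the train set at all.
    return 1.0 - chernoff_coefficient(f_c_train, f_c_test, 0.1)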
Based on these principles, we suggest using as the preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\mathcal {D}_A \le 0.02$). See Section SECREF4 for details about how to construct such splits.
The CFQ Dataset
We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.
The CFQ Dataset ::: Automatic, rule-based generation
BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.
Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).
In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):
Grammar rules that generate natural language constructs and corresponding logical forms.
Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.
Resolution rules that map constructs of the logical form to sparql constructs.
Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.
These rules define a language of triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.
The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18.
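The subsampling step can be pictured roughly as follows. This is a simplified sketch under our own assumptions (a round-robin over complexity levels and a greedy, entropy-driven choice within each level), not the exact procedure used to build CFQ; each example is assumed to expose `complexity` (its number of rule applications) and `compounds` (identifiers of the rule combinations it contains):

import math
import random
from collections import Counter, defaultdict

def empirical_entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c) if total else 0.0

def subsample(examples, target_size, pool_size=50):
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex.complexity].append(ex)
    for bucket in buckets.values():
        random.shuffle(bucket)
    selected, counts = [], Counter()
    levels = sorted(buckets)
    while len(selected) < target_size and any(buckets[l] for l in levels):
        for level in levels:  # round-robin keeps the complexity distribution roughly uniform
            if not buckets[level] or len(selected) >= target_size:
                continue
            candidates = buckets[level][:pool_size]  # small candidate pool for tractability
            best = max(candidates, key=lambda ex: empirical_entropy(counts + Counter(ex.compounds)))
            buckets[level].remove(best)
            counts.update(best.compounds)
            selected.append(best)
    return selected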
The CFQ Dataset ::: Dataset details and statistics
Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).
Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.
Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).
Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.
Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.
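A rough sketch of the substitution step described above, under several assumptions that are ours rather than the paper's: a hypothetical local sparql endpoint (the public Freebase endpoint is no longer available), a made-up `ENTITY_<i>` placeholder format, and omission of the negative-answer variant for closed questions:

import random
import re
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8890/sparql"  # hypothetical local Freebase mirror

def candidate_mid_combinations(where_body, num_placeholders):
    # Replace the assumed placeholders ENTITY_0, ENTITY_1, ... by sparql variables,
    # project them out, and return all satisfying MID combinations.
    body = re.sub(r"ENTITY_(\d+)", r"?e\1", where_body)
    projected = " ".join(f"?e{i}" for i in range(num_placeholders))
    client = SPARQLWrapper(ENDPOINT)
    client.setQuery(f"SELECT DISTINCT {projected} WHERE {{ {body} }} LIMIT 1000")
    client.setReturnFormat(JSON)
    rows = client.query().convert()["results"]["bindings"]
    return [tuple(row[f"e{i}"]["value"] for i in range(num_placeholders)) for row in rows]

def ground(question, where_body, num_placeholders):
    combinations = candidate_mid_combinations(where_body, num_placeholders)
    if not combinations:
        return None  # unsatisfiable in Freebase: abandon the candidate as unnatural
    chosen = random.choice(combinations)  # yields a question with a positive answer
    for i, mid in enumerate(chosen):
        question = question.replace(f"ENTITY_{i}", mid)
        where_body = where_body.replace(f"ENTITY_{i}", mid)
    return question, where_body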
Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.
Release and statistics.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
Compositionality Experiments for CFQ and scan
The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\mathcal {D}_C(V \Vert W)$ and $\mathcal {D}_A(V \Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.
In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.
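A condensed sketch of such a greedy procedure is shown below. It is our own simplification for illustration, not the exact algorithm used to build the released splits; `divergences` is assumed to be a callable returning the pair $(\mathcal {D}_A, \mathcal {D}_C)$ for a candidate split, and the occasional removal step mentioned above is omitted:

import random

def greedy_dbca_split(examples, divergences, target_dc, max_da=0.02, train_ratio=0.8, pool_size=100):
    pool = list(examples)
    random.shuffle(pool)
    train, test = [pool.pop()], [pool.pop()]  # random seed examples
    while pool:
        # Decide which side receives the next example so the train/test ratio is maintained.
        side = train if len(train) < train_ratio * (len(train) + len(test) + 1) else test
        def score(u):
            v = train + [u] if side is train else train
            w = test + [u] if side is test else test
            d_a, d_c = divergences(v, w)
            # Prefer candidates that move D_C toward the target while keeping D_A low.
            return abs(d_c - target_dc) + (0.0 if d_a <= max_da else d_a)
        best = min(pool[:pool_size], key=score)  # small candidate pool for tractability
        pool.remove(best)
        side.append(best)
    return train, test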
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\le N$, while the test set consists of examples with output length $> N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while the test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For scan, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.
The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.
Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.
This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar.
Experimental Results and Analysis ::: Experiment Setup
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention as an LSTM BIBREF19 with attention mechanism BIBREF20; (2) Transformer BIBREF21 and (3) Universal Transformer BIBREF22.
We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.
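For reference, one standard way to compute such intervals over the 5 replicates is a Student's t interval; this is our convention for illustration, not necessarily the exact one used:

import numpy as np
from scipy import stats

def mean_with_confidence_interval(accuracies, confidence=0.95):
    a = np.asarray(accuracies, dtype=float)
    half_width = stats.sem(a) * stats.t.ppf((1.0 + confidence) / 2.0, len(a) - 1)
    return a.mean(), half_width  # report as mean +/- half_width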
Note that while we construct test and validation sets from the same distribution, we suggest that hyperparameter tuning should be done on a random split (or random subset of the train set) if one wants to measure compositional generalization of a model with respect to an unknown test distribution as opposed to an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence and thus measure the ability of a particular architecture to yield models that can be made to generalize in one particular way (through leaking information about the test set in the hyperparameters).
Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the sparql output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\mapsto $ `select count(*) where {M0 a ns:film.writer}'.
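A small sketch of this kind of placeholder substitution (the entity names, MIDs, and the helper signature here are hypothetical):

def anonymize(question, query, entity_mentions):
    # entity_mentions: list of (surface_name, mid) pairs for the entities in the
    # question, with each mid given exactly as it appears in the sparql query.
    # Each distinct MID is mapped to a stable placeholder M0, M1, ... on both sides.
    mapping = {}
    for name, mid in entity_mentions:
        token = mapping.setdefault(mid, f"M{len(mapping)}")
        question = question.replace(name, token)
        query = query.replace(mid, token)
    return question, query

# For example (hypothetical MIDs):
# anonymize("Was Inception directed by Christopher Nolan",
#           "SELECT count(*) WHERE { ns:m.0aaa ns:film.film.directed_by ns:m.0bbb }",
#           [("Inception", "ns:m.0aaa"), ("Christopher Nolan", "ns:m.0bbb")])
# -> ("Was M0 directed by M1",
#     "SELECT count(*) WHERE { M0 ns:film.film.directed_by M1 }")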
The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18.
Experimental Results and Analysis ::: Results and analysis for CFQ
The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:
All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).
The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.
For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.
This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.
Note that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compositional divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the lengths ratios.
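This kind of $R^2$ comparison amounts to a simple linear fit of accuracy against each candidate predictor; a minimal sketch (our own, with hypothetical variable names):

import numpy as np

def r_squared(x, y):
    # Coefficient of determination of a simple linear fit of y on x.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# e.g. compare r_squared(compound_divergences, accuracies)
#      against r_squared(output_length_ratios, accuracies)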
Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed, with more details provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors for longer sequences, and when they do make an error, the predicted output is about 20% too short. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3's spouse?” the best system ignores the adjective “female”.
Experimental Results and Analysis ::: Results and analysis for scan
To demonstrate the use of our analysis method on another dataset, we re-create the scan dataset BIBREF2, which consists of compositional navigation commands (e.g., `turn left twice and jump') mapped to corresponding action sequences (e.g., `lturn lturn jump'). We use the original grammar while tracking the rule applications used for the construction of each input-output pair. This enables us to compare the compositional generalization abilities of the baseline systems on this dataset in a novel way.
Figure FIGREF28(b) shows the graphs for the scan data set in the same setup as Figure FIGREF28(a) does for CFQ. We observe that the compound divergence again is a good predictor for the mean accuracy for all three architectures. One difference is that for scan the systems are able to attain accuracies close to 100% for compound divergences up to around 0.2, which is not the case for CFQ. This seems to be in line with the fact that overall CFQ is a more complex task than scan: the total number of rules used in generating scan is only 38 in comparison to 443 rules in the construction of CFQ.
Appendix SECREF14 provides a comparison to other experiments presented in previous work, including experiments that have significantly different atom distributions. We observe that this generally causes lower accuracies but does not break the correlation between accuracy and compound divergence.
Related Work
To measure compositional generalization for semantic parsing to SQL, BIBREF8 propose to ensure that no SQL query pattern occurs in both the train and the test set (“query split”), and they provide such splits for several data sets. By evaluating several ML architectures the authors confirm that this query-pattern split is harder to learn than a conventional split.
BIBREF2 introduce the scan dataset, and several publications provide interesting analyses of compositional generalization using it BIBREF5, BIBREF6. BIBREF7 discuss a particular extension of a seq2seq model that is effective in handling difficult scan sub-tasks by separating semantic and syntactic information during learning. Our contributions extend the analyses on the scan data in several ways: CFQ provides richer annotations and covers a broader subset of English than the scan dataset, and we propose a comprehensive score for assessing aggregate compositionality of a system on a given task.
The mathematics dataset BIBREF13 is a large, automatically generated set of 112M samples in 56 separated sub-tasks. The authors present data and experiments that share common goals with our approach, but focus on mathematical reasoning instead of natural language. Our breakdown of generation rules per train sample is more fine-grained, which allows a more precise compositional generalization analysis. Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which however do not focus on compositional generalization.
A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.
BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.
While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.
A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.
With regards to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. The author suggests to use a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived on the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner that “can generalize to more complex problems than the learner has previously encountered”.
Conclusion and Outlook
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would also like to extend our approach to broader subsets of language understanding, including use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains.
Example Dataset Item
The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...').
Data Quality Analysis
During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples of the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided just for ease of human reading. Manual checking also indicated that all questions are associated with the semantically correct sparql queries. However, because we rely on the data present in Freebase, there are three debatable questions which sound somewhat unnatural (see the further discussion below the list).
Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?
Which male person was a sibling of [Andrew Klavan]?
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?
Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?
What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?
Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?
Was [Cachún cachún ra ra!]'s director a film director's American child?
Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?
Was [Holly Ellenson]'s child [Wally Ellenson]?
Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?
Was a film producer [Lilies of the Ghetto]'s distributor and producer?
Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?
What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?
Who influenced and was influenced by [Black Dynamite]'s star?
What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?
Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?
Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?
What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?
Who was a male costume designer and director of [Ene... due... like... fake...] and [The Windmill Bar]?
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?
Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?
What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?
What male employee of [Weta Workshop] was [Bad Taste]'s editor?
Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?
What art director influenced [DreamWorks Animation]'s founder?
Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?
Was [Eric Massa] a cinematographer's parent's sibling's American sibling?
What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?
What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?
What did an editor and cinematographer of [Tongue Twister Variations] direct?
Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?
Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?
Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?
Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?
Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?
Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?
Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?
What Italian film director directed [Children of Hannibal]?
What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?
Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?
Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?
What did a sibling of [Louise Rohr] executive produce, produce, and edit?
Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?
Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?
Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?
Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?
Further discussion of the debatable questions:
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character.
Data Distribution Analysis ::: Answer frequencies
Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have the highest frequency.
Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels
Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even.
Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations
Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations.
Divergence-Based Split Analysis ::: Qualitative analysis of MCD$_{1}$
Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easily observable “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9 we provide a random sample of size 20 from both the train and test set here. Indeed, even for the MCD$_{1}$ split with a high divergence of 0.694, the 20 random samples of train and test questions shown below cannot easily be distinguished as they both contain the same kind of questions of different sizes.
Train samples from MCD$_{1}$:
What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?
Which male person influenced and was influenced by [William Dean Howells]?
Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?
What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?
Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?
What did a spouse of [Shorts]'s producer's spouse executive produce and direct?
Did [P. G. Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?
What Mexican person did a film producer employ?
Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?
Who did [For the Secret Service]'s director's female spouse influence?
Who married, was influenced by, and influenced a company's founder?
Was [MAN SE]'s French male German employee's employer [Sulzer]?
Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?
Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?
Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?
Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?
Who was influenced by [Adam Sandler] and married a film producer?
Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?
Was a founder of [IG Port] employed by a film producer?
Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?
Test samples from MCD$_{1}$:
What sequel of [Paranormal Activity 2] was edited by and written by a film director?
What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?
Did [Pixar] employ an art director and employ [Susham Bedi]?
Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?
What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?
Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?
Was [Luke Larkin Music] an art director's employer?
What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?
Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?
What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?
Was a person's female sibling [Reggae in a Babylon]'s producer?
Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?
Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?
Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?
Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?
Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?
Was [The Voyeur]'s executive producer an actor's parent?
Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?
Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?
Was a costume designer's employer [Philips High School]?
Divergence-Based Split Analysis ::: Quantitative analysis of MCD$_{1}$
Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom or compound, we use the fraction of examples it appears in. Both atoms and compounds are indexed primarily by their frequency in the train set, secondarily by their frequency in the test set, in decreasing order. For practical reasons, we only look at a small subset of compounds here, but we believe the analysis is representative.
We can see that the frequency of atoms in the two sets is very aligned and that all atoms from the test set appear in the train set. The frequency of compounds, however, is wildly different: while some invariably occur in both sets, the frequencies are often not aligned, and most compounds appear only in either the train or the test set.
Hyperparameters
The experiments were run using the tensor2tensor framework BIBREF39 with some of the hyperparameters tuned using a random split of a previous, smaller version of the data set during development. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134.
Detailed error analysis ::: Breakdown of error types
Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).
The accuracy metric requires the model response and the golden (correct) answer to be exactly equal to each other. Thus, a sparql query with the same clauses as the golden answer but in a different order or with some of the clauses appearing multiple times is also considered to be an error despite being equivalent to the golden answer in its meaning. The number of such errors is relatively small, though, accounting for 1.8%, 0.6% and 1.5% of the total test set size for LSTM+Attention, Transformer and Universal Transformer respectively.
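To make the effect of the strict exact-match metric concrete, the following sketch shows an order- and duplicate-insensitive comparison that would count such reordered but equivalent queries as correct (a rough simplification of ours that only normalizes the clause list, not full sparql equivalence):

def clause_set(query):
    # Split a query of the form 'SELECT ... WHERE { ... }' into its head and the
    # set of its body clauses, ignoring clause order and duplicates (clauses are
    # assumed to be terminated by ' .' as in the listings below).
    head, body = query.split("{", 1)
    body = body.rsplit("}", 1)[0]
    return head.strip(), {clause.strip() for clause in body.split(" .") if clause.strip()}

def order_insensitive_match(predicted, golden):
    try:
        return clause_set(predicted) == clause_set(golden)
    except ValueError:  # malformed query without a '{ ... }' body
        return False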
Detailed error analysis ::: Qualitative error analysis
Below we qualitatively analyze a number of instances the models fail on. We anonymize the MIDs in the same way as the data is provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k), and then randomly select queries from this list. In the following we discuss a few cases in more detail. Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:
ns:people.person.child = ns:people.person.children|
ns:fictional_universe.fictional_character.children|
ns:organization.organization.child/
ns:organization.organization_relationship.child
ns:people.person.sibling = ns:people.person.siblings/
ns:people.sibling_relationship.sibling|
ns:fictional_universe.fictional_character.siblings/
ns:fictional_universe.sibling_relationship_of_fictional_characters.siblings
Query 1: “What sibling of M0 was M1' s parent?”
Golden (correct) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.child M1 .
?x0 ns:people.person.sibling M0 .
FILTER ( ?x0 != M0 )
}
Inferred (system) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.sibling ?x1 .
?x0 ns:people.person.sibling M0 .
?x1 ns:people.person.child M1 .
FILTER ( ?x0 != ?x1 )
}
Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set, in order to show that we believe enough information has been provided in the train set for the question to be answered correctly.
Some subqueries of the query and their occurrences are shown in Table TABREF140. While the exact subquery “What sibling” does not occur at training, the two words have been shown separately in many instances: the subqueries “sibling of Mx” and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of the rule tree of this example with those shown at training. As can be read from the table, similar sentences have been shown during training. Some examples are:
What was executive produced by and written by a sibling of M0?
What costume designer did M1's parent employ?
What cinematographer was a film editor that M2 and M3 married?
What film director was a character influenced by M2?
Query 2: “Did a male film director edit and direct M0 and M1?”
Golden (correct) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:film.editor.film M1 .
?x0 ns:people.person.gender m_05zppz
}
Inferred (system) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:people.person.gender m_05zppz
}
Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the relation between the director and movie M1.
Looking at subqueries and their occurrence count (Table TABREF145), we see again that various subqueries occur often during training. However, “edit and direct” have not been shown often together. When looking at the rule trees, we see that both conjunctions in the query occur often at training separately: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur at training. This may be the reason why all systems fail on this example, but at the same time we believe a compositional learner should be able to generalize correctly given the training instances. Some examples are:
Did a male film director that M3's parent married influence an art director?
Did a film producer that played M2 edit and direct M1?
Did a screenwriter edit and direct a sequel of M1
Did a Chinese male film director edit M1 and M2?
Additional experimental results on scan
Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available. (Data splits obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):
BIBREF2:
simple (random)
by action sequence length
adding a primitive and adding a primitive along with complex combinations
BIBREF6:
adding a template
adding template fillers
adding more training examples of fillers (fewshot)
In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.
We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that the compound divergence is correlated with accuracy for these baseline architectures. One difference to the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific potentially problematic scenario for learning. E.g. in the experiment `primitive<jump>' (with very low accuracies for all three systems) the jump command is shown exactly in one combination (namely alone) in the training data while it occurs in all test examples in arbitrary combinations.
This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).
While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports).
Analysis of relations between accuracy, compound divergence, and training size
Figure FIGREF28 shows for all baseline systems a strong correlation between accuracy and compound divergence for the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation is changed for different training sizes. Figures FIGREF159 and FIGREF159 show that this correlation holds also for smaller training sizes but that the accuracy is generally somewhat lower for smaller training sizes.
At the same time, we observe that the difference between accuracies of various training sizes gets smaller as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach a training size of about 80k for CFQ and about 6k for scan. This indicates that further increasing the train set size may not be sufficient to do well on these compositionality experiments.
Logical Form
To represent our logical form we use syntax of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.
Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:
and the following would be roles:
Note that our logical form does not have roles other than those of the form RolePair($C_1$, $C_2$).
New strings are generated by using a special function new_var($\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\$S$. This string can later be used as a variable in a sparql constraint.
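A minimal sketch of such a generator (the global counter is our assumption; in the grammar, the generated string is bound to the variable $\$S$ rather than returned):

import itertools

_counter = itertools.count()

def new_var():
    # Returns a fresh variable name of the form ?x<N> for use in sparql constraints.
    return f"?x{next(_counter)}"

# new_var() -> "?x0", new_var() -> "?x1", ...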
Rule Format
This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.
General formatting conventions shared across all rule types:
Variable names are prefixed by `$'. Example: $X.
(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)
Concept names are written in camel case. Example: FilmProducer.
Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.
Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.
String literals are enclosed in single quotes. Example: 'ns:film:director'.
Rule Format ::: Grammar rule format
The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.
Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:
Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.
Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.
Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).
Shared values in feature structures are represented through the use of variables.
Specifically, in the rules index, CFQ grammar rules are described in the format
$T_{0}(F_{0})[H]/L_{0} \rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$
where:
Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).
Each $L_{i}$ for $i \in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.
$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.
Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.
$H$ is either an empty string or one of the variables $L_{i}$ for $i \in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.
Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.
This means that, for example, the rule
ACTIVE_VP[_head]/_head
$\rightarrow $ VP_SIMPLE(form:infinitive)/_head
can be considered a notational shorthand for the following rule expressed purely using feature lists:
(cat:ACTIVE_VP, sem:_head)[_head]
$\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)
Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).
Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).
Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.
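To make the unification behavior of these feature conventions concrete, the following is a minimal Python sketch (our own simplification, not the actual CFQ grammar engine). Each feature specification is represented as a mapping from attributes to the set of admissible values, the special value '_none_' stands for absence, an unmentioned attribute is unconstrained, and negated specifications are assumed to have already been expanded into the equivalent positive disjunction:

def unify_features(f1, f2):
    # f1, f2: dicts mapping attribute -> set of admissible values,
    # e.g. {'form': {'gerund', 'infinitive'}}.  Returns the unified
    # specification, or None if unification fails.
    result = {}
    for attr in set(f1) | set(f2):
        allowed1, allowed2 = f1.get(attr), f2.get(attr)
        if allowed1 is None:
            result[attr] = set(allowed2)
        elif allowed2 is None:
            result[attr] = set(allowed1)
        else:
            common = allowed1 & allowed2
            if not common:
                return None  # clash, e.g. (form:gerund) vs (form:pastparticiple)
            result[attr] = common
    return result

# (form:gerund|infinitive) unifies with (form:infinitive):
assert unify_features({'form': {'gerund', 'infinitive'}},
                      {'form': {'infinitive'}}) == {'form': {'infinitive'}}
# (subject:_none_, object:yes) unifies with (object:yes) but not with (subject:yes, object:yes):
assert unify_features({'subject': {'_none_'}, 'object': {'yes'}}, {'object': {'yes'}}) is not None
assert unify_features({'subject': {'_none_'}, 'object': {'yes'}}, {'subject': {'yes'}, 'object': {'yes'}}) is None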
Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\$v_{1}, ..., a_{k}$:$\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.
Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only if they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\sqcap $. For example, under this criterion, the logical form GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male or with ($\exists $RolePair(Predicate, Gender).Male) $\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male $\sqcap $ $\exists $RolePair(Predicate, GenderHaver).FilmProducer.
Rule Format ::: Knowledge rule format
CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:
$\rightarrow K$, where $K$ is knowledge that is output.
By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.
The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.
All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.
Supported knowledge predicates:
BoundRolePairs
ExclusiveRolePair
FreebaseEntityMapping
FreebasePropertyMapping
FreebaseTypeMapping
NonExclusiveRolePair
Role
Rule Format ::: Inference rule format
CFQ inference rules transform logical forms and may be conditioned on knowledge. In the rules index, they are described in the following format:
$K: L_0 \rightarrow L_1$
where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$.
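Schematically, applying such a rule can be sketched as follows (a deliberately simplified illustration assuming logical forms and knowledge facts are plain strings and that the variable replacement $r$ has already been found; the actual generator operates on structured logical forms):

def instantiate_inference_rule(preconditions, l0, l1, replacement, kb):
    # preconditions: knowledge templates, e.g. ['Role($Q, $P)', 'Role($R, $P)']
    # l0, l1:        input/output logical-form templates
    # replacement:   dict mapping variables such as '$Q' to concrete strings
    # kb:            set of ground knowledge facts (strings)
    # Returns the concrete rewrite (r(L0), r(L1)) licensed by the rule,
    # or None if some precondition r(K) is not in the knowledge base.
    def substitute(template):
        # Substitute longer variable names first to avoid prefix clashes.
        for var, value in sorted(replacement.items(), key=lambda kv: -len(kv[0])):
            template = template.replace(var, value)
        return template

    if not all(substitute(k) in kb for k in preconditions):
        return None
    return substitute(l0), substitute(l1)

The same precondition check carries over to the resolution rules of the next subsection, where the righthand side is a sequence of sparql terms rather than a single logical form.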
Rule Format ::: Resolution rule format
CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.
In the rules index, they are described in the following format:
$K: S_0 \rightarrow S_1~...~S_n$
where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.
Generation Algorithm
Our generation algorithm produces triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.
The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\langle \text{question, logical form} \rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\langle \text{text, logical form} \rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.
When a $\langle \text{question, logical form} \rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield as consistent as possible a rule application DAG and $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple for any given question.
Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.
The resulting $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple is then appended to the CFQ dataset.
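The clause-sorting and variable-renumbering step can be illustrated with a short Python sketch (our own approximation; in particular, the exact renumbering convention, here the order of first appearance after sorting, is an assumption):

import re

def normalize_clauses(clauses):
    # Sort the WHERE clauses alphabetically, then rename variables ?x<N>
    # consistently across all clauses in order of first appearance.
    ordered = sorted(clauses)
    mapping = {}
    def rename(match):
        var = match.group(0)
        if var not in mapping:
            mapping[var] = f"?x{len(mapping)}"
        return mapping[var]
    return [re.sub(r"\?x\d+", rename, clause) for clause in ordered]

# normalize_clauses(['?x3 ns:film.director.film M0', '?x3 a ns:people.person'])
# -> ['?x0 a ns:people.person', '?x0 ns:film.director.film M0']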
Generation Algorithm ::: Join by Logical Form
In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules mean that the complete behavior of the generator should be observable on any reasonably-sized train set. The same applies to certain core behaviors of the description logic $\mathcal {EL}$, such as commutativity and associativity of $\sqcap $, which we omit tracking as explicit rules due to their similar ubiquity of application.
One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example.
Generation Algorithm ::: Relationship between Generation and Parsing
Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task. The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text.
Generation Algorithm ::: Selecting an appropriate sample set
For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:
maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)
while using a uniform distribution over complexity, from simple examples to increasingly complex ones.
We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.
For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.
The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity.
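The greedy entropy-maximizing subsampling can be sketched as follows (a simplified illustration in which each candidate example is reduced to a collection of subgraph identifiers; the actual implementation restricts the subgraphs considered and processes complexity levels separately, as described above):

import math
from collections import Counter

def entropy(counter):
    total = sum(counter.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log(c / total) for c in counter.values())

def greedy_subsample(candidates, k):
    # candidates: list of examples, each given as a list of subgraph ids.
    # Greedily selects k examples, at each step picking the candidate that
    # maximizes the entropy of the resulting subgraph frequency distribution.
    selected, counts = [], Counter()
    for _ in range(min(k, len(candidates))):
        best = max(candidates, key=lambda ex: entropy(counts + Counter(ex)))
        candidates = [ex for ex in candidates if ex is not best]
        counts = counts + Counter(best)
        selected.append(best)
    return selected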
Example of a rule application DAG
Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form.
Example of a rule application DAG ::: DAG normalization
As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.
Specifically, in addition to the measures described above that ensure rules are applied in a deterministic order, we normalize the DAG by producing only edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\prime }$ with $B \rightarrow B^{\prime }$ (i.e., $B^{\prime }$ depends on $B$), we do not produce the edge $B^{\prime } \rightarrow A$.
Example of a rule application DAG ::: Concept abbreviations
For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:
Director = FilmDirector
Directee = DirectedFilm
Directing = DirectingAFilm
SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent))
ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
E1 = Entity('?E1')
Example of a rule application DAG ::: Entity placeholders
As described in Section SECREF16, during generation we initially generate a $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.
Versions of entity rules applicable when using entity placeholders:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
Versions of entity rules applicable when using actual entity MIDs:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/'m.'$X
$\rightarrow $ 'm.'$X
ENTITY_MID:
ent2sparql('m.'$X) $\rightarrow $ 'ns:m.'$X
Example of a rule application DAG ::: Subgraphs and their weights
Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.
As described in Section SECREF6, given a large subset $\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \in \mathbb {G}$ that occurs in that sample as:

$w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$

where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set.
Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.
E.g., in the extreme case that $G$ only occurs within the context $G^{\prime }$, the weight of $G$ will be 0 in all samples. Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explainable by the frequency of occurrence of one of their super-compounds.
Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set.
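Under this definition, the weight computation for a single sample can be sketched in a few lines (our own representation: each occurrence of $G$ in the sample is given as the set of strict supergraphs containing it, and the empirical probabilities $P(G^{\prime }|G)$ are assumed to be precomputed over the full sample set):

def subgraph_weight(occurrences, p_super):
    # occurrences: for each occurrence of G in the sample, the set of ids of
    #              strict supergraphs G' that contain that occurrence
    #              (assumed non-empty, since G occurs in the sample).
    # p_super:     dict mapping a supergraph id G' to P(G' | G).
    return max(
        1.0 - max((p_super.get(sup, 0.0) for sup in supergraphs), default=0.0)
        for supergraphs in occurrences
    )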
Rules Index
Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17.
Rules Index ::: Grammar rules
S=WHQ_F6E9egkQqxj:
S/_x
$\rightarrow $ WHQ/_x
WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:
WHQ[_subject]/DropDependency(_subject) $\sqcap $ DropDependency($\exists $RolePair(Subject, SubjectHaver)._action)
$\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject
VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action
NPQ_INDIRECT=WHO_5ptbPXXbuLZ:
NPQ_INDIRECT(number:singular)/Person
$\rightarrow $ 'who'
VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:
VP_INDIRECT(object:yes)[_action]/_action $\sqcap $ $\exists $RolePair(ObjectHaver, Object)._object
$\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action
DP/_object
VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:
VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head
ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:
ACTIVE_VP(number:singular)[_head]/_head
$\rightarrow $ VP_SIMPLE(form:past)/_head
VP_SIMPLE=VP_GHWf3fcVRZg:
VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head
$\rightarrow $ VP(concept_id:DirectingAFilm)/_head
VP=DIRECTED_JkYzNbQyXtv:
VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm
$\rightarrow $ 'directed'
DP=ENTITY_M6fSP5GvRaN:
DP(is_proper_noun:yes, number:singular)[_head]/_head
$\rightarrow $ ENTITY/_head
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
... (211 grammar rules total)
Rules Index ::: Inference rules
BOUND_ROLES_WITH_PREDICATE_OBJECT:
BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):
$\exists $RolePair($Q, $R).($A $\sqcap $ $B) $\rightarrow $ $\exists $RolePair($S, $T).($A $\sqcap $ $B)
BOUND_ROLES_WITH_PREDICATE_SUBJECT:
BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):
$B $\sqcap $ $\exists $RolePair($Q, $R).$A $\rightarrow $ $B $\sqcap $ $\exists $RolePair($S, $T).$A
IGNORE_BOUND_ROLE_PAIRS:
$A $\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\rightarrow $ $A
IGNORE_DEPENDENCY_DROPPING:
DropDependency($X) $\rightarrow $ $X
PREDICATE_UNREIFICATION:
Role($Q, $P), Role($R, $P):
$\exists $RolePair($Q, Predicate).($P $\sqcap $ $\exists $RolePair(Predicate, $R).$A) $\rightarrow $ $\exists $RolePair($Q, $R).$A
... (17 inference rules total)
Rules Index ::: Resolution rules
CONJUNCTION_WITHOUT_ENTITY:
def2sparql($X $\sqcap $ $Y, $V1) $\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
GET_SPECIALIZATIONS:
get_specializations($X) $\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'
GET_VAR_CONJUNCTION:
get_var($X $\sqcap $ $Y, $V1) $\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))
GET_VAR_RELATION:
get_var($\exists $$R.$X, $V1) $\rightarrow $ $V1
GET_VAR_TYPE:
FreebaseTypeMapping($X, $F):
get_var($X, $V1) $\rightarrow $ $V1
PROPERTY_MAPPING:
FreebasePropertyMapping($R, $F):
role2sparql($R) $\rightarrow $ $F
RELATION_MAPPING_WITHOUT_EXCLUSION:
NonExclusiveRolePair($R):
rel2sparql($X, $R, $Y) $\rightarrow $ $X role2sparql($R) $Y
RELATION_TO_ENTITY:
def2sparql($\exists $$R.$X, $V1) $\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))
SHARED_VAR:
shared_var($X, $X) $\rightarrow $ $X
SPECIALIZATION_OF_TYPE:
def2sparql($X, $V1) $\rightarrow $ $V1 ' a ' type2sparql($X)
TYPE_MAPPING:
FreebaseTypeMapping($X, $F):
type2sparql($X) $\rightarrow $ $F
... (21 resolution rules total)
Rules Index ::: Knowledge rules
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))
$\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')
$\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')
$\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)
$\rightarrow $ Role(DirectedFilm, DirectingFilm)
$\rightarrow $ Role(FilmDirector, DirectingFilm)
... (194 knowledge rules total) | 239,357 English question-answer pairs |
8db45a8217f6be30c31f9b9a3146bf267de68389 | 8db45a8217f6be30c31f9b9a3146bf267de68389_0 | Q: What are other approaches into creating compositional generalization benchmarks?
Text: Introduction
Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.
Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.
In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.
As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:
We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).
We present the Compositional Freebase Questions (CFQ), a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).
We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).
We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).
Distribution-Based Compositionality Assessment (DBCA)
Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.
As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds, on the other hand, correspond to the combinations of these atoms that appear in the various examples: “Who directed [entity]?”, “Did Christopher Nolan [predicate] Inception?”, etc.
To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and “Who produced Inception?” because the atoms are identically represented in the train and test sets while the compounds differ.
To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)
Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality
We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:
Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.
Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.
The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).
To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \in \mathbb {G}$ in a sample as $w(G) = \max _{g \in \text{occ}(G)} (1 - \max _{G^{\prime }: g \prec g^{\prime } \in \text{occ}(G^{\prime })} P(G^{\prime }| G))$, where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.
We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\alpha (P \Vert Q) = \sum _{k} p_k^\alpha \, q_k^{1-\alpha } \in [0, 1]$ BIBREF9. For the atom divergence, we use $\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire of making the atom distributions in train and test as similar as possible. For the compound divergence, we use $\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define as follows the notions of compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$:

$\mathcal {D}_C(V \Vert W) = 1 - C_{0.1}(\mathcal {F}_C(V) \Vert \mathcal {F}_C(W))$

$\mathcal {D}_A(V \Vert W) = 1 - C_{0.5}(\mathcal {F}_A(V) \Vert \mathcal {F}_A(W))$
Based on these principles, we suggest to use as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\mathcal {D}_A \le 0.02$). See Section SECREF4 for details about how to construct such splits.
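For concreteness, the divergence computation can be written down directly from these definitions (a minimal sketch assuming the atom and compound distributions are given as dictionaries mapping atom or compound identifiers to relative frequencies that sum to 1):

def chernoff_coefficient(p, q, alpha):
    # C_alpha(P || Q) = sum_k p_k^alpha * q_k^(1 - alpha)
    keys = set(p) | set(q)
    return sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1 - alpha) for k in keys)

def atom_divergence(f_a_train, f_a_test):
    return 1.0 - chernoff_coefficient(f_a_train, f_a_test, alpha=0.5)

def compound_divergence(f_c_train, f_c_test):
    return 1.0 - chernoff_coefficient(f_c_train, f_c_test, alpha=0.1)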
The CFQ Dataset
We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper.
The CFQ Dataset ::: Automatic, rule-based generation
BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.
Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).
In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):
Grammar rules that generate natural language constructs and corresponding logical forms.
Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.
Resolution rules that map constructs of the logical form to sparql constructs.
Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.
These rules define a language of triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.
The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18.
The CFQ Dataset ::: Dataset details and statistics
Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).
Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.
Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).
Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.
Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with positive answer. In the case of a closed question, we also generate a variation that yields the answer “No”, which we do by mixing in MIDs from another substitution (or a more generic replacement if that fails) to keep the question as plausible-sounding as possible. We then randomly choose either the question with positive or with negative answer, to avoid spurious correlations between question structure and yes/no answer.
Semantic and structural filtering. Even among the questions that can be satisfied in Freebase, there are some that are meaningful but somewhat unnatural, such as “Was Strange Days directed by a female person whose gender is female?”. We automatically filter out such unnatural questions using semantic and structural rules. Note that since we do not require a learner to identify such questions, we do not track these filtering rules.
Release and statistics.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
Compositionality Experiments for CFQ and scan
The DBCA principles described in Section SECREF6 enable a generic and task-independent method for constructing compositionality experiments. To construct such an experiment for a dataset $U$ and a desired combination of atom and compound divergences, we use an iterative greedy algorithm that starts with empty sets $V$ (train) and $W$ (test), and then alternates between adding an example $u \in U$ to $V$ or $W$ (while maintaining the desired train/test ratio). At each iteration, the element $u$ is selected such that $\mathcal {D}_C(V \Vert W)$ and $\mathcal {D}_A(V \Vert W)$ are kept as closely as possible to the desired values. To reduce the risk of being stuck in a local optimum, we also allow removing examples at certain iterations.
In general, there are many different splits that satisfy a desired compound and atom divergence. This reflects the fact that a certain compound may either occur exclusively in the train set or the test set, or it may occur in both of them because the split may have achieved the desired compound divergence by separating other (possibly orthogonal) compounds. Our greedy algorithm addresses this by making random choices along the way, starting with picking the first example randomly.
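A minimal sketch of this construction is shown below (our own simplification: examples are dictionaries with 'atoms' and 'compounds' fields listing rule and compound identifiers, the score is a simple squared error against the target divergences, the example-removal step is omitted, and the atom_divergence and compound_divergence helpers sketched earlier, or any equivalent implementation, are assumed to be available):

import random
from collections import Counter

def normalized_freq(items):
    counts = Counter(items)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}

def split_score(train, test, target_dc, target_da=0.0):
    dc = compound_divergence(normalized_freq(c for ex in train for c in ex['compounds']),
                             normalized_freq(c for ex in test for c in ex['compounds']))
    da = atom_divergence(normalized_freq(a for ex in train for a in ex['atoms']),
                         normalized_freq(a for ex in test for a in ex['atoms']))
    return (dc - target_dc) ** 2 + (da - target_da) ** 2

def greedy_split(examples, target_dc, train_fraction=0.8):
    examples = list(examples)
    random.shuffle(examples)
    train, test = [], []
    for ex in examples:
        # Only consider a side if adding to it keeps the desired train/test ratio.
        may_train = len(train) < train_fraction * (len(train) + len(test) + 1)
        may_test = len(test) < (1 - train_fraction) * (len(train) + len(test) + 1)
        s_train = split_score(train + [ex], test, target_dc) if may_train else float('inf')
        s_test = split_score(train, test + [ex], target_dc) if may_test else float('inf')
        (train if s_train <= s_test else test).append(ex)
    return train, test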
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2 where the train set consists of examples with output (sparql query or action sequence) length $\le \hspace{-2.5pt} N$, while the test set consists of examples with output length $> \hspace{-2.5pt} N$. For CFQ, we use $N = 7$ constraints. For scan, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while the test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For scan, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
All of these experiments are based on the same train and validation/test sizes of 40% and 10% of the whole set, respectively. For CFQ, this corresponds to about 96k train and 12k validation and test examples, whereas for scan, it corresponds to about 8k train and 1k validation and test examples. We chose to use half of the full dataset for the train-test splits, as it led to an appropriate balance between high compound divergence and high train set size in informal experiments.
The MCD splits achieve a significantly higher compound divergence at a similar atom divergence when compared to the other experiments. The reason for this is that, instead of focusing on only one intuitive but rather arbitrary aspect of compositional generalization, the MCD splits aim to optimize divergence across all compounds directly.
Interestingly, the MCD splits still correlate with the aspects of compositional generalization that are targeted by the other experiments in this table. As shown in the four right columns of Table TABREF18, for each MCD split, the train set $V$ contains on average shorter examples than the test set $W$ (measured by the ratio of average lengths), and $V$ also contains only a small fraction of the input and output patterns used in $W$ (measured by the fraction of patterns covered). However, these correlations are less pronounced than for the experiments that specifically target these aspects, and they vary significantly across the different MCD splits.
This illustrates that MCD splits are comprehensive in the sense that they cover many different aspects of compositional generalization, especially when looking at multiple of them. It also means that whether a certain example ends up in train or test is not determined solely by a single criterion that is immediately observable when looking at the input and output (such as length). As we show in Appendix SECREF91, this generally makes the examples in train and test look fairly similar.
Experimental Results and Analysis ::: Experiment Setup
We use three encoder-decoder neural architectures as baselines: (1) LSTM+attention, an LSTM BIBREF19 with an attention mechanism BIBREF20; (2) Transformer BIBREF21; and (3) Universal Transformer BIBREF22.
We tune the hyperparameters using a CFQ random split, and we keep the hyperparameters fixed for both CFQ and scan (listed in Appendix SECREF12). In particular the number of training steps is kept constant to remove this factor of variation. We train a fresh model for each experiment, and we replicate each experiment 5 times and report the resulting mean accuracy with 95% confidence intervals.
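As a small illustration of the reporting, the mean and a 95% confidence interval over the 5 replicas can be computed as follows (a sketch; the paper does not specify the exact interval estimator, so a t-interval with 4 degrees of freedom is assumed here):

import statistics

def mean_with_ci(accuracies, t_value=2.776):
    # t_value: two-sided 95% critical value of the t-distribution with
    # len(accuracies) - 1 = 4 degrees of freedom (an assumption).
    mean = statistics.mean(accuracies)
    half_width = t_value * statistics.stdev(accuracies) / len(accuracies) ** 0.5
    return mean, half_width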
Note that while we construct test and validation sets from the same distribution, we suggest that hyperparameter tuning should be done on a random split (or random subset of the train set) if one wants to measure compositional generalization of a model with respect to an unknown test distribution as opposed to an architecture with respect to a known test distribution. Tuning on a validation set that has the same distribution as the test set would amount to optimizing for a particular type of compound divergence and thus measure the ability for a particular architecture to yield models that can be made to generalize in one particular way (through leaking information about the test set in the hyperparameters).
Similarly to BIBREF8, we anonymize the Freebase names and MIDs in the textual input and the SPARQL output, respectively, by replacing them with a placeholder (e.g., “M0” for the first MID). This removes the need for two learning sub-tasks that are orthogonal to our focus: named entity recognition and learning that the MIDs are patterns that need to be copied. An example input-output (question-query) pair then looks like the following: `Was M0 a screenwriter' $\mapsto $ `select count(*) where {M0 a ns:film.writer}'.
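A minimal sketch of this anonymization step (our own illustration; the per-example entity list and its ordering are an assumed representation, not the paper's actual preprocessing code):

def anonymize(question, sparql, entities):
    # entities: list of (name, mid) pairs for this example; the i-th pair is
    # replaced by the shared placeholder M<i> in both question and query.
    for index, (name, mid) in enumerate(entities):
        placeholder = f"M{index}"
        question = question.replace(name, placeholder)
        sparql = sparql.replace(f"ns:{mid}", placeholder).replace(mid, placeholder)
    return question, sparql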
The main relation we are interested in is the one between compound divergence of the data split and accuracy. Specifically, we compute the accuracy of each model configuration on a series of divergence-based splits that we produce with target compound divergences that span the range between zero and the maximum achievable in 0.1 increments (while ensuring that atom divergence does not exceed the value of 0.02). For each target divergence, we produce at least 3 different splits with different randomization parameters (compare Section SECREF4). For comparison, we also compute accuracies on the other splits shown in Table TABREF18.
Experimental Results and Analysis ::: Results and analysis for CFQ
The mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:
All models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).
The mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.
For all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.
This suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.
Note that the experiment based on output length exhibits a worse accuracy than what we would expect based on its compound divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the length ratios.
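The reported $R^2$ values can be computed for any predictor (length ratio or compound divergence) with a one-line helper (a sketch; for a simple linear fit, $R^2$ equals the squared Pearson correlation):

import numpy as np

def r_squared(predictor, accuracy):
    # e.g. r_squared(compound_divergences, mean_accuracies)
    return float(np.corrcoef(predictor, accuracy)[0, 1] ** 2)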
Error analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed, with more details provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors for longer sequences, and when they make an error the predicted output is about 20% too short. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3's spouse?” the best system ignores the adjective “female”.
Experimental Results and Analysis ::: Results and analysis for scan
To demonstrate the use of our analysis method on another dataset, we re-create the scan dataset BIBREF2, which consists of compositional navigation commands (e.g., `turn left twice and jump') mapped to corresponding action sequences (e.g., `lturn lturn jump'). We use the original grammar while tracking the rule applications used for the construction of each input-output pair. This enables us to compare the compositional generalization abilities of the baseline systems on this dataset in a novel way.
Figure FIGREF28(b) shows the graphs for the scan data set in the same setup as Figure FIGREF28(a) does for CFQ. We observe that the compound divergence again is a good predictor for the mean accuracy for all three architectures. One difference is that for scan the systems are able to attain accuracies close to 100% for compound divergences up to around 0.2, which is not the case for CFQ. This seems to be in line with the fact that overall CFQ is a more complex task than scan: the total number of rules used in generating scan is only 38 in comparison to 443 rules in the construction of CFQ.
Appendix SECREF14 provides a comparison to other experiments presented in previous work, including experiments that have significantly different atom distributions. We observe that this generally causes lower accuracies but does not break the correlation between accuracy and compound divergence.
Related Work
To measure compositional generalization for semantic parsing to SQL, BIBREF8 propose to ensure that no SQL query pattern occurs in both the train and the test set (“query split”), and they provide such splits for several data sets. By evaluating several ML architectures the authors confirm that this query-pattern split is harder to learn than a conventional split.
BIBREF2 introduce the scan dataset, and several publications provide interesting analyses of compositional generalization using it BIBREF5, BIBREF6. BIBREF7 discuss a particular extension of a seq2seq model that is effective in handling difficult scan sub-tasks by separating semantic and syntactic information during learning. Our contributions extend the analyses on the scan data in several ways: CFQ provides richer annotations and covers a broader subset of English than the scan dataset, and we propose a comprehensive score for assessing aggregate compositionality of a system on a given task.
The mathematics dataset BIBREF13 is a large, automatically generated set of 112M samples in 56 separated sub-tasks. The authors present data and experiments that share common goals with our approach, but focus on mathematical reasoning instead of natural language. Our breakdown of generation rules per train sample is more fine-grained, which allows a more precise compositional generalization analysis. Being automatically generated also links our approach to datasets such as the bAbI tasks BIBREF23, which however do not focus on compositional generalization.
A dataset related to CFQ is ComplexWebQuestions BIBREF18, which consists of complex questions that are automatically generated from simpler sub-questions in WebQuestionsSP BIBREF17 and then reworded manually. While these datasets can be used for semantic parsing, we did not find them suitable for a thorough compositionality analysis because a consistent annotation with the compositional structure would be hard to obtain. Other approaches to semi-automatic dataset creation also use paraphrasing BIBREF24, BIBREF25.
BIBREF3 introduce the generated clevr dataset, which shares common goals with our work applied in the area of visual reasoning. The dataset's functional programs capture some of the structural information of the questions and are linked one-to-many to the 423 question patterns used. The authors specifically investigate generalization to new combinations of visual attributes in one experiment which uses a particular train-test split based on the colors used. BIBREF26 propose a neural-symbolic architecture and discuss promising results on additional specific splits of the clevr data, e.g. based on object counts and program depth. BIBREF27 describe how the application of compositional attention networks to the clevr data leads to structured and data-efficient learning. BIBREF28 present a large, compositional, generated visual question answering data set with functional programs, on which neural state machines achieve good performance BIBREF29. The use of specific splits between train and test data also occurs in the context of visual data. E.g., BIBREF30 propose a greedy split algorithm to maximize the coverage of test concepts in the train set while keeping question-type/answer pairs disjoint and observe performance degradation of existing approaches. BIBREF31 introduce a synthetic visual question answering dataset called sqoop, which is used to test whether a learner can answer questions about all possible object pairs after being trained on a subset.
While these datasets are very interesting, the additional annotation that we provide in CFQ indicating the exact rule trees needed to link input and output makes additional analyses regarding compositionality possible. Our analyses go beyond many of the presented discussions (that mostly focus on accuracy regarding particular holdouts) in formalizing an approach that uses the atom and compound divergences to measure compositionality.
A number of ML approaches have been developed for semantic parsing. BIBREF32 propose Key-Value Memory Networks – neural network-based architectures that internalize a knowledge base into the network – and introduce the WikiMovies dataset. BIBREF33 develop an end-to-end architecture that can handle noise in questions and learn multi-hop reasoning simultaneously. They introduce the MetaQA benchmark that is based on WikiMovies but uses a set of only 511 question patterns (mod entities) shared between train and test.
With regard to studying compositionality in ML, BIBREF34 argue that combinatorial generalization should be a top priority to achieve human-like abilities. BIBREF35 discusses measuring the compositionality of a trained representation, e.g. of a learned embedding. The author suggests using a tree reconstruction error that is based on how well the oracle derivation of the input matches the structure that can be derived from the representations. BIBREF4 discuss an architecture that enables the learning of compositional concept operators on top of learned visual abstractions. BIBREF36 introduce the compositional recursive learner, which “can generalize to more complex problems than the learner has previously encountered”.
Conclusion and Outlook
In this paper we presented what is (to the best of our knowledge) the largest and most comprehensive benchmark for compositional generalization on a realistic NLU task. It is based on a new dataset generated via a principled rule-based approach and a new method of splitting the dataset by optimizing the divergence of atom and compound distributions between train and test sets. The performance of three baselines indicates that in a simple but realistic NLU scenario, state-of-the-art learning systems fail to generalize compositionally even if they are provided with large amounts of training data and that the mean accuracy is strongly correlated with the compound divergence.
We hope our work will inspire others to use this benchmark as a yardstick to advance the compositional generalization capabilities of learning systems and achieve high accuracy at high compound divergence. Some specific directions that we consider promising include applying unsupervised pretraining on the input language or output queries and the use of more diverse or more targeted learning architectures, such as syntactic attention BIBREF7. We also believe it would be interesting to apply the DBCA approach to other domains such as visual reasoning, e.g. based on clevr BIBREF3.
In the area of compositionality benchmarks, we are interested in determining the performance of current architectures on the end-to-end task that expects a natural language answer given a natural language question in CFQ. We would also like to extend our approach to broader subsets of language understanding, including the use of ambiguous constructs, negations, quantification, comparatives, additional languages, and other vertical domains.
Example Dataset Item
The following shows an example data item including the question text in various forms, the answer, the sparql query in various forms, some tracked statistics, and the set of used rules (atoms) and the applied rule tree (compound). Some details are omitted, indicated by ellipses (`...').
Data Quality Analysis
During the development of our data generation pipeline, we manually checked the generated examples for quality. Below is a random selection of 50 examples of the final CFQ dataset (no cherry-picking was used). Brackets around [entity names] are provided only for ease of human reading. Manual checking also indicated that all questions are associated with semantically correct sparql queries. However, because we rely on the data present in Freebase, there are three debatable questions that sound somewhat unnatural; they are repeated and discussed further below the list.
Who was a writer, star, and cinematographer of [Tetsuo: The Bullet Man], [Nightmare Detective], and [Bullet Ballet]?
Which male person was a sibling of [Andrew Klavan]?
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
Did a producer, writer, and art director of [Thelma & Luis] produce, direct, and write [Light Girls]?
Were [Hangover Square], [Zack and Miri Make a Porno], and [Clerks II] edited by a founder and employee of a film producer?
What American parent of [Charlie Sistovaris] was a British screenwriter's sibling?
Did [Anne Williams Rubinstein] marry a person that influenced a screenwriter and influenced [John Most]?
Was [Cachún cachún ra ra!]'s director a film director's American child?
Did [Maisy's Garden]'s executive producer write, edit, and executive produce [Pakalppooram], [It's Not About the Shawerma], [Rick's Canoe], and [The Fifth Wall]?
Was [Holly Ellenson]'s child [Wally Ellenson]?
Did [Emerald Cities]'s cinematographer, writer, and editor edit, executive produce, and direct [Blues for the Avatar] and [White Stork Is Coming]?
Was a film producer [Lilies of the Ghetto]'s distributor and producer?
Which child of [Mimi Iger] did a film producer employ and [The Walt Disney Company] employ?
What Japanese spouse of [Hong Kong Paradise]'s star did [Ineko Arima] and [Nishiki Kô] marry?
Who influenced and was influenced by [Black Dynamite]'s star?
What was written by, edited by, directed by, produced by, and executive produced by [Pauline Collins]'s child's sibling?
Which Swedish film director that [Théo Van Horn]'s actor influenced did [Egen ingȧng] star?
Who was influenced by [Golden Yeggs]'s star, was influenced by [Richard Pryor], was influenced by [Bill Murray], and married [Elaine Chappelle]?
What did [This Is My Show]'s director, cinematographer, and star direct, edit, produce, and executive produce?
Who was a male costume designer and director of [Ene... due... like... fake...] and [The Windmill Bar]?
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
Did an art director, editor, director, writer, cinematographer, and star of [Tetsuo II: Body Hammer] produce [Nightmare Detective], [Tetsuo: The Iron Man], and [A Snake of June]?
Was [Alexandra Naoum] [Monsieur Verdoux]'s producer, writer, and star?
What film director founded [THX], was employed by [American Zoetrope], [LucasArts], [Skywalker Sound], and [Lucasfilm], and founded [Industrial Light & Magic]?
What male employee of [Weta Workshop] was [Bad Taste]'s editor?
Were [Weta Digital] and [Weta Workshop] founded by a cinematographer and founded by a film editor?
What art director influenced [DreamWorks Animation]'s founder?
Did [Daisies] star [Fruit of Paradise]'s costume designer and writer, star [Jaromír Vomácka], and star [Jirina Myskova]?
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
What British costume designer of [The Love Letter] and [The Chamber] was a screenwriter's child?
Was [Eric Massa] a cinematographer's parent's sibling's American sibling?
What art director of [Stepping Sisters 1932] was a parent of [Imre Sándorházi]?
What was executive produced by, written by, produced by, and edited by a director of [V/H/S/2]'s sequel?
What did an editor and cinematographer of [Tongue Twister Variations] direct?
Who was a Canadian screenwriter that produced [Her Painted Hero] and [The Nick of Time Baby]?
Which American parent of [Janet Friedman] did [Rose Friedman] influence and marry?
Did [George Carlin] influence [Louis C.K.: Shameless]'s executive producer and influence [Joan Rivers]?
Who was a male writer, star, director, and costume designer of [The Wizard of Speed and Time]?
Who was [Lost Boys: The Thirst]'s prequel's sequel's art director?
Did a cinematographer's female parent executive produce, direct, and write [Hit Dat Shit 5]?
Who married [Siri von Essen], influenced [A Lesson in Love]'s director and art director, influenced [Tennessee Williams], and influenced [Maxim Gorky]?
What Italian film director directed [Children of Hannibal]?
What film producer directed, wrote, edited, and produced [la estrella], [la ardilla], and [el valiente]?
Were [Flames: The Movie] and [Soltera] directed by a male person and executive produced by [Hilda Russoff]'s spouse?
Was a sibling of [Fawwaz bin Abdulaziz Al Saud] [Badr bin Abdulaziz Al Saud]'s sibling?
What did a sibling of [Louise Rohr] executive produce, produce, and edit?
Did a French cinematographer of [Le Volcan interdit] edit [The Last Bolshevik] and direct [A.K.] and [Statues Also Die]?
Was [Mannai Thottu Kumbidanum] directed by and written by a Dutch male cinematographer?
Was a director, art director, executive producer, and costume designer of [But I'm a Genderqueer] [Lauren Soldano]?
Was [When We Were Kings] produced by a film editor whose spouse was employed by [Royal Academy of Dramatic Art] and distributed by [PolyGram Filmed Entertainment]?
Further discussion of the debatable questions:
Did [Wallace Stevens] influence [Levi Seeley]'s spouse and parent?
The occurrence of the seemingly implausible combination of roles “spouse and parent” is due to incorrect data in Freebase, in which there are 502 entities asserted to be both the spouse and parent of other entities. For instance, “Anne Dacre” is both the spouse and parent of “Christopher Conyers”. We can also find occasional occurrences in CFQ of other implausible role combinations, such as “parent and child”, “spouse and sibling” etc., triggered by similar Freebase data issues.
Was [Kumudu Munasinghe] a Dutch film producer's country of nationality's employee?
The somewhat unnatural phrasing of “country's employee” occurs due to a modeling choice in Freebase, in which the same entity is used to represent both a country and the government of that country. This makes it possible for a country to employ a person.
What character was influenced by a costume designer, influenced by [Pedro Calderón de la Barca], influenced by [William Shakespeare] and [Luis Buñuel], and influenced by [Miguel de Unamuno]?
The somewhat unnatural phrasing of “a character was influenced by” occurs due to a modeling choice in Freebase, in which when a film character is based on a real person, Freebase commonly uses the same entity to represent both. This makes “person” and “character” exchangeable in the questions where the person is also a film character.
Data Distribution Analysis ::: Answer frequencies
Table TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency.
Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels
Figure FIGREF87 illustrates how subsampling changes the distribution of questions in CFQ with different levels of complexity to become more even.
Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations
Subsampling increases the frequency of rarely used rules and rule combinations and decreases the frequency of commonly used ones. For rules, this is illustrated by Figure FIGREF89 which shows the ratio of examples each rule appears in, before and after subsampling, in the order of their frequency. Figure FIGREF90 shows the same comparison for rule combinations.
Divergence-Based Split Analysis ::: Qualitative analysis of MCD$_{1}$
Traditional compositionality experiments often use train-test splits based on observable properties of the input and output (e.g., input/output complexity, input/output patterns, and input/output feature holdouts). One consequence of this is that the difference between train and test examples is relatively easily observable “with the naked eye”. The lists below illustrate that this is not usually the case for divergence-based splits. Similar to the random sample of the general data in Appendix SECREF9 we provide a random sample of size 20 from both the train and test set here. Indeed, even for the MCD$_{1}$ split with a high divergence of 0.694, the 20 random samples of train and test questions shown below cannot easily be distinguished as they both contain the same kind of questions of different sizes.
Train samples from MCD$_{1}$:
What was founded by a costume designer, founded by [Forgotten Silver]'s star, and founded by [Jamie Selkirk]?
Which male person influenced and was influenced by [William Dean Howells]?
Did [Marco Bellocchio] produce, write, and direct [Greek Pete]?
What did [Rick Schmidt] edit, [Philip Rashkovetsky] edit, and a cinematographer edit?
Were [The Living Playing Cards] and [The Haunted Castle] edited by, directed by, and produced by a French writer of [Le cauchemar de Méliès]?
What did a spouse of [Shorts]'s producer's spouse executive produce and direct?
Did [P. G. Wodehouse], [Raymond Chandler], [Edward Bunker], [Pauline Kael], and [Michael Cimino] influence [Grindhouse]'s cinematographer and star?
What Mexican person did a film producer employ?
Did [The Midnight After]'s Chinese executive producer edit [Perfect Life] and [Dumplings]?
Who did [For the Secret Service]'s director's female spouse influence?
Who married, was influenced by, and influenced a company's founder?
Was [MAN SE]'s French male German employee's employer [Sulzer]?
Who influenced an actor that [Robin Santana] was influenced by and [K. J. Stevens] was influenced by and was influenced by [Virgil]?
Did [Pirates of Malaysia] star [Giuseppe Addobbati] and star a Spanish screenwriter?
Was [The Silence of the Sea] written by, produced by, executive produced by, directed by, and edited by [The Red Circle]'s French editor?
Did [Chanel] employ a German costume designer, employ [Gaspard Ulliel] and [Maureen Chiquet], and employ [Jacques Polge]?
Who was influenced by [Adam Sandler] and married a film producer?
Did a Spanish screenwriter's child direct and edit [Bakuchi-uchi: Nagaremono]?
Was a founder of [IG Port] employed by a film producer?
Was [Orizzonti Orizzonti!] executive produced by and written by an art director's sibling?
Test samples from MCD$_{1}$:
What sequel of [Paranormal Activity 2] was edited by and written by a film director?
What spouse of a film producer founded [Grand Hustle Records] and was employed by [40/40 Club], [Roc-A-Fella Records], and [Def Jam Recordings]?
Did [Pixar] employ an art director and employ [Susham Bedi]?
Was a sibling of [David Lindbland] [Dynamit Nobel]'s Swedish founder?
What prequel of [Charlie the Unicorn 2] starred, was edited by, was produced by, was written by, and was directed by [Jason Steele]?
Did [Rick Schmidt] direct, produce, executive produce, and edit [Blues for the Avatar], [White Stork Is Coming], [The Fifth Wall], and [It's Not About the Shawerma]?
Was [Luke Larkin Music] an art director's employer?
What prequel of [Goat Story 2] was executive produced, written, directed, edited, and produced by [Jan Tománek]?
Was [Bullet Ballet]'s editor, star, director, and cinematographer [Promises Written in Water]'s star, director, writer, executive producer, and art director?
What was edited by, produced by, directed by, and written by [Ellis Kaan Ozen], [Thaw Bwe], [Jeffrey Malkofsky-Berger], and [Leslie Berkley]?
Was a person's female sibling [Reggae in a Babylon]'s producer?
Who was a director, cinematographer, executive producer, art director, producer, star, and writer of [The Man Who Killed God]?
Was [My Sweet Home]'s director, editor, writer, art director, producer, cinematographer, and costume designer a person?
Which art director, star, and editor of [The Brown Bunny] and [Promises Written in Water] did [Cord] star?
Did an employee and founder of [Virgin Mobile Australia], [Virgin Mobile USA], and [Virgin Mobile France] found [Virgin America] and found [V2 Records]?
Was a Chinese executive producer and star of [Happy Ghost II] and [All's Well, Ends Well 2010] a film director?
Was [The Voyeur]'s executive producer an actor's parent?
Did [Erasable Cities]'s writer, producer, editor, art director, cinematographer, and director produce and executive produce [Promises Written in Water]?
Who was an editor, star, and cinematographer of [Tetsuo: The Iron Man], [A Snake of June], and [Bullet Ballet]?
Was a costume designer's employer [Philips High School]?
Divergence-Based Split Analysis ::: Quantitative analysis of MCD$_{1}$
Figure FIGREF133 shows the frequency of atoms (upper graph) and compounds (lower graph) in the train and test sets of the maximum compound divergence split for the CFQ data. As the frequency of an atom or compound, we use the fraction of examples in which it appears. Both atoms and compounds are indexed primarily by their frequency in the train set and secondarily by their frequency in the test set, in decreasing order. For practical reasons we only look at a small subset of compounds here, but we believe the analysis is representative.
We can see that the atom frequencies in the two sets are closely aligned and that all atoms from the test set appear in the train set. The compound frequencies, however, differ widely: while some compounds occur in both sets, their frequencies are often not aligned, and most compounds appear only in either the train or the test set.
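A minimal Python sketch of this bookkeeping (the per-example atom/compound sets below are hypothetical placeholders, not the released data format):
from collections import Counter
# Hypothetical per-example atom (or compound) sets; in the actual analysis these are
# derived from the rule application DAGs described in Appendix SECREF19.
train_examples = [{"a", "b"}, {"a"}, {"b", "c"}, {"a", "c"}]
test_examples = [{"a", "c"}, {"c"}]
def frequencies(examples):
    # Fraction of examples in which each atom/compound appears.
    counts = Counter()
    for units in examples:
        counts.update(set(units))
    return {u: c / len(examples) for u, c in counts.items()}
train_freq = frequencies(train_examples)
test_freq = frequencies(test_examples)
# Index primarily by train frequency, secondarily by test frequency, both decreasing.
ordered = sorted(set(train_freq) | set(test_freq),
                 key=lambda u: (-train_freq.get(u, 0.0), -test_freq.get(u, 0.0)))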
Hyperparameters
The experiments were run using the tensor2tensor framework BIBREF39 with some of the hyperparameters tuned using a random split of a previous, smaller version of the data set during development. We use the default hyperparameter sets publicly available in the tensor2tensor implementation (obtained from https://github.com/tensorflow/tensor2tensor) and override the tuned hyperparameters. The hyperparameters used are summarized in Table TABREF134.
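As a rough sketch of this setup (not the exact configuration used; the overridden values below are placeholders, and Table TABREF134 lists the actual settings), one can register a custom hyperparameter set in tensor2tensor that starts from a public default and overrides individual values:
from tensor2tensor.models import transformer
from tensor2tensor.utils import registry
@registry.register_hparams
def transformer_cfq_sketch():
    # Start from the public default hyperparameter set and override tuned values.
    hparams = transformer.transformer_base()
    hparams.num_hidden_layers = 2    # placeholder; see Table TABREF134 for actual values
    hparams.hidden_size = 128        # placeholder
    hparams.learning_rate = 0.05     # placeholder
    return hparams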
Detailed error analysis ::: Breakdown of error types
Table TABREF136 shows a more detailed analysis of the errors that the baseline models make on CFQ for MCD$_{1}$ (compare Section SECREF24). The reported errors are bucketized into three main types: sparql property clause error, sparql filter clause error and malformed sparql query in the model's output. The total number of test set examples exhibiting any clause or filter error is reported (sum column), as well as the number of insertions (ins), deletions (del), and substitutions (sub) in the model's output with respect to the correct query. Property clause substitution errors are further subdivided into those where only the property itself is wrong while subject and object are correct (prop), those where the property is correct but either subject or object is wrong (node) and those where both the property and the subject or the object are wrong (both).
The accuracy metric requires the model response and the golden (correct) answer to be exactly equal. Thus, a sparql query with the same clauses as the golden answer but in a different order, or with some clauses appearing multiple times, is also counted as an error despite being semantically equivalent to the golden answer. The number of such errors is relatively small, however, accounting for 1.8%, 0.6%, and 1.5% of the total test set size for LSTM+Attention, Transformer, and Universal Transformer, respectively.
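As a minimal illustration of how such order-insensitive equivalences can be detected (a hypothetical helper, not the evaluation code used for the reported numbers; it assumes one clause per line between the braces, as in the query examples shown later in this appendix):
def clause_set(query):
    # Extract the text between the outermost braces and collect the set of clauses.
    body = query.split("{", 1)[1].rsplit("}", 1)[0]
    return {line.strip().rstrip(" .") for line in body.splitlines() if line.strip()}
def exact_match(golden, predicted):
    return golden == predicted
def equivalent_up_to_clause_order(golden, predicted):
    # Same SELECT prefix and the same set of clauses, ignoring order and duplicates.
    same_prefix = golden.split("{", 1)[0] == predicted.split("{", 1)[0]
    return same_prefix and clause_set(golden) == clause_set(predicted)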
Detailed error analysis ::: Qualitative error analysis
Below we qualitatively analyze a number of instances the models fail on. We anonymize the MIDs in the same way as the data is provided to the models (see Section SECREF5). We first select queries on which all machine learning systems fail in all replicated runs (about 5k instances out of a total of about 12k), and then randomly select queries from this list. In the following we discuss a few cases in more detail. Note that, for readability, we use the following abbreviations for the sparql properties in Query 1:
ns:people.person.child = ns:people.person.children|
ns:fictional_universe.fictional_character.children|
ns:organization.organization.child/
ns:organization.organization_relationship.child
ns:people.person.sibling = ns:people.person.siblings/
ns:people.sibling_relationship.sibling|
ns:fictional_universe.fictional_character.siblings/
ns:fictional_universe.sibling_relationship_of_fictional_characters.siblings
Query 1: “What sibling of M0 was M1' s parent?”
Golden (correct) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.child M1 .
?x0 ns:people.person.sibling M0 .
FILTER ( ?x0 != M0 )
}
Inferred (system) sparql query:
SELECT DISTINCT ?x0 WHERE {
?x0 ns:people.person.sibling ?x1 .
?x0 ns:people.person.sibling M0 .
?x1 ns:people.person.child M1 .
FILTER ( ?x0 != ?x1 )
}
Analysis. The meaning of the sparql query generated by the system is “What sibling of M0 was a sibling of M1's parent?”, which is incorrect. We next analyze the train set in order to show that it provides enough information for the question to be answered correctly.
Some subqueries of the query and their occurrence counts are shown in Table TABREF140. While the exact subquery “What sibling” does not occur during training, its two words appear separately in many instances: the subqueries “sibling of Mx” and “Mx's parent” occur 2,331 and 1,222 times, respectively. We can analyze this example in more detail by comparing parts of its rule tree with those seen during training. As can be seen from the table, similar sentences have been shown during training. Some examples are:
What was executive produced by and written by a sibling of M0?
What costume designer did M1's parent employ?
What cinematographer was a film editor that M2 and M3 married?
What film director was a character influenced by M2?
Query 2: “Did a male film director edit and direct M0 and M1?”
Golden (correct) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:film.editor.film M1 .
?x0 ns:people.person.gender m_05zppz
}
Inferred (system) sparql query:
SELECT count ( * ) WHERE {
?x0 ns:film.director.film M0 .
?x0 ns:film.director.film M1 .
?x0 ns:film.editor.film M0 .
?x0 ns:people.person.gender m_05zppz
}
Analysis. The meaning of the inferred sparql query is “Did a male film director edit M0 and direct M0 and M1?”. It thus seems the model `forgets' to include the relation between the director and movie M1.
Looking at subqueries and their occurrence counts (Table TABREF145), we see again that various subqueries occur often during training. However, the combination “edit and direct” has not been shown often. When looking at the rule trees, we see that both conjunctions in the query individually occur often during training: “Did [DetNP] [VP] and [VP] [DetNP]” occurs 1,432 times, and “Did [DetNP] [VP] [Entity] and [Entity]” occurs 909 times. However, they never occur together: “Did [DetNP] [VP] and [VP] [DetNP] and [DetNP]” does not occur during training. This may be the reason why all systems fail on this example, but at the same time we believe a compositional learner should be able to generalize correctly given the training instances. Some examples are:
Did a male film director that M3's parent married influence an art director?
Did a film producer that played M2 edit and direct M1?
Did a screenwriter edit and direct a sequel of M1
Did a Chinese male film director edit M1 and M2?
Additional experimental results on scan
Figure FIGREF150 shows a scatter plot of accuracy vs. compound divergence for the three baseline architectures (see Section SECREF5) on existing splits of the scan data. These splits are discussed in BIBREF2 and BIBREF6, and the exact split data is available. (Data splits obtained from https://github.com/brendenlake/SCAN). We map these splits onto the re-created scan data, which enables us to measure the atom and compound divergences. The authors present a total of six split experiments (some with several sub-experiments):
BIBREF2:
simple (random)
by action sequence length
adding a primitive and adding a primitive along with complex combinations
BIBREF6:
adding a template
adding template fillers
adding more training examples of fillers (fewshot)
In the plot, we omit some data points that are too close to be distinguished easily. The point labels have the form `(abbreviated experiment name)<(parameter)>@(number of samples) (baseline system abbreviation) [(train set size fraction), (split atom divergence)]'. The train set size fraction is given as a percentage of the overall data size. The baseline system abbreviations are LSTM, T for Transformer, UT for Universal Transformer, T/UT where both transformer models are indistinguishable, and empty where all three systems perform indistinguishably. The abbreviated experiment name is one of the names in italics above.
We can observe a strong dependency of the accuracies on the compound divergence of the data split. Again, this seems to indicate that the compound divergence is correlated with accuracy for these baseline architectures. One difference to the data shown in Figure FIGREF28(b) is that for this set of experiments the accuracy drops faster with increasing compound divergence. One explanation for this effect is that the experiments are directly aimed at highlighting one specific potentially problematic scenario for learning. E.g. in the experiment `primitive<jump>' (with very low accuracies for all three systems) the jump command is shown exactly in one combination (namely alone) in the training data while it occurs in all test examples in arbitrary combinations.
This is reflected in the higher atom divergence value of 0.08 for this split, as well as in all other splits that exhibit a low accuracy at a low compound divergence in Figure FIGREF150. Note that BIBREF2 already compare the experiment `primitive<jump>' to the experiment `primitive<turn left>' for which all three systems achieve a much higher accuracy. In their interpretation of this phenomenon, they mainly focus on the fact that in contrast to 'jump', the action 'turn left' is also generated by other inputs. We additionally observe that the latter experiment also has a slightly lower atom divergence of 0.07, a lower compound divergence, and it covers a much larger part of the data in the train set (94% vs. 63%).
While the accuracies we observe for the `primitive' experiments are very much in line with the results reported by BIBREF2, we noticed a few interesting differences for other experiments: All three systems go to 100% accuracy on the fewshot task even for one example (while BIBREF6 report a slowly increasing accuracy for the architecture they evaluate). On the other hand, both transformer models only reach 0% accuracy on the length split, while the LSTM obtains around 14% (which is in line with what previous work reports).
Analysis of relations between accuracy, compound divergence, and training size
Figure FIGREF28 shows, for all baseline systems, a strong correlation between accuracy and compound divergence at the chosen training sizes (96k for CFQ and 8k for scan). One interesting question is whether and how this correlation changes for different training sizes. Figures FIGREF159 and FIGREF159 show that the correlation also holds for smaller training sizes, although the accuracy is generally somewhat lower.
At the same time, we observe that the difference between the accuracies at various training sizes shrinks as the training size increases. This can be seen even more clearly in Figures FIGREF159 and FIGREF159, which plot the training size rather than the compound divergence on the x-axis. These figures show that the increase in accuracy flattens out significantly as we reach a training size of about 80k for CFQ and about 6k for scan. This indicates that further increasing the train set size may not be sufficient to do well on these compositionality experiments.
Logical Form
To represent our logical form we use syntax of the description logic $\mathcal {EL}$ BIBREF14, BIBREF15 with additional concept and role constructors. These constructors do not have description logic semantics; instead, their meaning is completely determined by the set of generation rules of the CFQ dataset.
Let $A$ be a concept name, $C, C_1, C_2$ be concepts, $R, R_1, R_2$ be roles, and $v$ be a raw string. Then the following would be concepts:
and the following would be roles:
Note that our logical form does not have roles other than those in a form of RolePair($C_1$, $C_2$).
New strings are generated by using a special function new_var($\$S$). This function generates a unique string of the form ?x<N>, where N is a unique number, and assigns that string to variable $\$S$. This string can later be used as a variable in a sparql constraint.
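A minimal sketch of the described behaviour (a hypothetical re-implementation; the aspect of assigning the generated string to the variable $\$S$ is omitted):
import itertools
_counter = itertools.count()
def new_var():
    # Returns a fresh sparql variable name of the form ?x<N>.
    return "?x{}".format(next(_counter))
# new_var() -> '?x0', new_var() -> '?x1', ...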
Rule Format
This section describes the format of each of the rule types we use for generating the CFQ dataset, in the form in which they appear in the rules index in Appendix SECREF20.
General formatting conventions shared across all rule types:
Variable names are prefixed by `$'. Example: $X.
(Exception: In grammar rules, while variables standing for constants are prefixed by `$', variables standing for logical forms are prefixed by `_'. Example: _action.)
Concept names are written in camel case. Example: FilmProducer.
Names of functions that output logical forms (concepts, roles, or knowledge) are also written in camel case. Examples: DropDependency, BoundRolePairs, RolePair.
Names of functions that output string literals or which are used for converting logical forms to sparql are written in lowercase with underscores. Examples: def2sparql, get_specializations, new_var.
String literals are enclosed in single quotes. Example: 'ns:film:director'.
Rule Format ::: Grammar rule format
The CFQ grammar is a unification-based grammar of recursive rewriting rules used to generate pairs of strings and their corresponding logical form. For an introductory overview of unification-based grammars including several popular variations, see BIBREF38. The rules in the CFQ grammar follow a similar syntax in particular to that used in the Prolog extension GULP 3.1 BIBREF16, with the addition of support for disjunction, negation, absence, and default inheritance of features, and with minor differences in formatting described below.
Properties shared between the CFQ grammar syntax and that of BIBREF16 include the following:
Grammar rules are notated as variations of context-free phrase-structure rules of the form $T_{0} \rightarrow T_{1}$ ... $T_{n}$, where each of the syntactic non-terminals and terminals $T_{0}$ ... $T_{n}$ are augmented with feature lists in parentheses.
Each grammar rule can be interpreted as specifying how a feature structure (with logical form) that is unifiable with the lefthand side can be re-written to the sequence of features structures (with logical form) indicated on the righthand side.
Features are represented as attribute-value pairs separated by a colon (i.e., $attribute$:$value$).
Shared values in feature structures are represented through the use of variables.
Specifically, in the rules index, CFQ grammar rules are described in the format
$T_{0}(F_{0})[H]/L_{0} \rightarrow T_{1}(F_{1})/L_{1}$ ... $T_{n}(F_{n})/L_{n}$
where:
Each $T_{i}$ is a syntactic category (syntactic nonterminal) or a string literal (syntactic terminal).
Each $L_{i}$ for $i \in [1, n]$ is either a variable representing a logical form or an empty string. In the case when $L_{i}$ is an empty string, we allow dropping the trailing slash from the $T_{i}(F_{i})/L_{i}$ expression, resulting in just $T_{i}(F_{i})$.
$L_{0}$ is a logical form expressed in terms of $L_{1}...L_{n}$.
Each $F_{i}$ is a comma-separated feature list of the form $(attribute_{1}$:$value_{1}$, ..., $attribute_{k}$:$value_{k})$. In the case where $F_{i}$ is empty, we allow dropping the parentheses from the $T_{i}(F_{i})$ expression, resulting in just $T_{i}$.
$H$ is either an empty string or one of the variables $L_{i}$ for $i \in [1, n]$, indicating that $F_{0}$ default inherits the features of $F_{i}$ (the syntactic “head”). In the case where $H$ is an empty string, we allow dropping the brackets from the $T_{0}(F_{0})[H]$ expression, resulting in just $T_{0}(F_{0})$.
Note that while the above notation adopts the convention of splitting out the syntactic category and logical form from the feature list for visual prominence and to highlight the relationship to its context-free phrase-structure rule core, behaviorally it is identical to adding two more features to the feature list (we can call them, for example, $cat$ and $sem$) to represent the syntactic category and logical form.
This means that, for example, the rule
ACTIVE_VP[_head]/_head
$\rightarrow $ VP_SIMPLE(form:infinitive)/_head
can be considered a notational shorthand for the following rule expressed purely using feature lists:
(cat:ACTIVE_VP, sem:_head)[_head]
$\rightarrow $ (cat:VP_SIMPLE, sem:_head, form:infinitive)
Disjunction of features. Similarly to BIBREF37, we allow disjunctive feature specifications, which we denote by separating the alternative values with a pipe (`$|$'). The feature specification (form:gerund|infinitive) would thus unify with either (form:gerund) or (form:infinitive), but not with (form:pastparticiple).
Absence of features. We use a special atomic value _none_ to indicate that a given feature must either be absent or else explicitly set to the value _none_. The feature specification (subject:_none_, object:yes) would thus unify with either (object:yes) or (subject:_none_, object:yes), but not with (subject:yes, object:yes).
Negation of features. Similarly to BIBREF37, we allow negated feature specifications, which we denote by prefixing the attribute with a minus sign (`-'). The feature specification (-form:gerund|infinitive) would thus unify with (form:pastparticiple) or (form:_none_), but not with (form:gerund) or (form:infinitive). In general, a feature specification of the form (-attribute:v$_{1}$|...|v$_{j}$) can be considered a notational shorthand for (attribute:v$_{j+1}$|...|v$_{k}$|_none_), where v$_{j+1}$|...|v$_{k}$ is an enumeration of all possible values of the feature attribute other than v$_{1}$|...|v$_{j}$.
Default inheritance of features. If the lefthand side term is notated as $T_{0}(F_{0})[H]$, with $H$ equal to one of the variables $L_{i}$ for $i \in [1, n]$, then this is interpreted as a notational shorthand for augmenting both $F_{0}$ and $F_{i}$ with an additional list of attribute-value pairs $(a_{1}$:$\$v_{1}, ..., a_{k}$:$\$v_{k})$, where $a_{1} ... a_{k}$ are all of the attributes listed in $F_{i}$ that were not originally listed in $F_{0}$.
Unification of logical forms. As described in Appendix SECREF16, we represent logical forms using a variation of description logic, rather than using feature structures. In the context of unification, we consider logical forms to unify if and only they achieve structural concept equality after variable replacement (using the same variable replacements applied during unification of the corresponding feature lists), while taking into account the commutativity and associativity of $\sqcap $. For example, under this criterion, the logical form GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender)._head would unify with either GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male or with ($\exists $RolePair(Predicate, Gender).Male) $\sqcap $ GenderRel under a variable replacement mapping _head to Male, but would not unify with GenderRel $\sqcap $ $\exists $RolePair(Predicate, Gender).Male $\sqcap $ $\exists $RolePair(Predicate, GenderHaver).FilmProducer.
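The following is a minimal Python sketch of this unification criterion (a hypothetical representation in which concepts are nested tuples, the conjunction operator is modeled as an 'AND' node, and the variable bindings are assumed to come from the feature-list unification):
def flatten_and(concept):
    # Treats ('AND', c1, c2, ...) as a flat, unordered conjunction, recursively,
    # which accounts for commutativity and associativity of the conjunction.
    if isinstance(concept, tuple) and concept and concept[0] == "AND":
        parts = []
        for child in concept[1:]:
            flat = flatten_and(child)
            if isinstance(flat, frozenset):
                parts.extend(flat)
            else:
                parts.append(flat)
        return frozenset(parts)
    if isinstance(concept, tuple):
        return tuple(flatten_and(child) for child in concept)
    return concept
def substitute(concept, bindings):
    # Replaces variable leaves (e.g. '_head') according to the given bindings.
    if isinstance(concept, tuple):
        return tuple(substitute(child, bindings) for child in concept)
    return bindings.get(concept, concept)
def logical_forms_unify(a, b, bindings):
    return flatten_and(substitute(a, bindings)) == flatten_and(substitute(b, bindings))
# The example from the text: GenderRel AND Exists(RolePair(Predicate, Gender))._head
# unifies with (Exists(RolePair(Predicate, Gender)).Male) AND GenderRel under {_head: Male}.
a = ("AND", "GenderRel", ("EXISTS", ("RolePair", "Predicate", "Gender"), "_head"))
b = ("AND", ("EXISTS", ("RolePair", "Predicate", "Gender"), "Male"), "GenderRel")
assert logical_forms_unify(a, b, {"_head": "Male"})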
Rule Format ::: Knowledge rule format
CFQ knowledge rules output expressions representing facts that are known to be true. They have no direct effect on text, logical forms, or sparql, but the generated knowledge can be used as preconditions to other rules. In the rules index, they are described in the following format:
$\rightarrow K$, where $K$ is knowledge that is output.
By convention, we define the rule name of a knowledge rule to be simply the string representing the knowledge that the rule outputs, and we omit the rule name in the rules index for brevity.
The union of those rules defines a knowledge base which we denote with $KB^{CFQ}$.
All knowledge in CFQ is represented in the form $P(X_1,...,X_n)$, where $P$ is a predicate from the list below, and $X_1, ..., X_n$ are either logical forms or else raw strings. Knowledge rules do not use variable-based expressions.
Supported knowledge predicates:
BoundRolePairs
ExclusiveRolePair
FreebaseEntityMapping
FreebasePropertyMapping
FreebaseTypeMapping
NonExclusiveRolePair
Role
Rule Format ::: Inference rule format
CFQ inference rules transform logical forms and may be conditioned on knowledge. In the rules index, they are described in the following format:
$K: L_0 \rightarrow L_1$
where $K$ represents a comma-separated list of knowledge preconditions, and $L_0$ and $L_1$ represent the input and output logical forms, all expressed in terms of a shared set of variables $v_1,...,v_m$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the inference rule by rewriting $r(L_0)$ to $r(L_1)$.
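A minimal sketch of this application semantics (hypothetical; for illustration it represents knowledge items and logical forms as strings and performs purely textual variable replacement, whereas the actual generator operates on structured logical forms):
# Hypothetical string encoding of a few knowledge items from KB^CFQ.
KB_CFQ = {
    "Role(DirectedFilm, DirectingFilm)",
    "Role(FilmDirector, DirectingFilm)",
}
def apply_replacement(expr, r):
    # Purely textual variable replacement, for illustration only; longer variable
    # names are replaced first to avoid prefix clashes.
    for var in sorted(r, key=len, reverse=True):
        expr = expr.replace(var, r[var])
    return expr
def try_apply_inference_rule(preconditions, lhs, rhs, r, logical_form):
    # The rule fires only if every instantiated precondition is in KB^CFQ and the
    # instantiated left-hand side occurs in the current logical form.
    if not all(apply_replacement(k, r) in KB_CFQ for k in preconditions):
        return logical_form
    return logical_form.replace(apply_replacement(lhs, r), apply_replacement(rhs, r))
An analogous scheme applies to the resolution rules described below, with the rewrite acting on sparql expressions instead of logical forms.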
Rule Format ::: Resolution rule format
CFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.
In the rules index, they are described in the following format:
$K: S_0 \rightarrow S_1~...~S_n$
where $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.
These rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.
Generation Algorithm
Our generation algorithm produces triples of the form $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ in a mixed top-down and bottom-up fashion, with the final program of rule applications output alongside each triple in the form of a rule application DAG. The top-down portion of generation is responsible for efficiently searching for rules that can be applied to produce a meaningful example, while the bottom-up portion is responsible for actually applying the rules (i.e., performing the composition) and for producing the DAG.
The generation process proceeds in two phases, each involving a top-down as well as bottom-up aspect. In the first phase, we apply grammar rules interleaved with inference rules to produce a pair of $\langle \text{question, logical form} \rangle $. Specifically, we apply a recursive top-down algorithm which starts with the $S$ nonterminal and at every step performs a random search over the rules in the grammar which could produce the target nonterminal with accompanying feature structure. This top-down process proceeds until a candidate syntactic parse tree is attained whose leaves consist purely of syntactic terminals (i.e., string literals or entity placeholders). The grammar rules from this candidate parse tree are then applied in a bottom-up fashion beginning with the syntactic terminals to yield a tree of $\langle \text{text, logical form} \rangle $ pairs. After each such bottom-up grammar rule application, we then greedily apply all possible inference rules on the resulting logical forms, applying an arbitrary deterministic ordering to the inference rules in cases where rules could be applied in multiple valid orderings. This ensures that inference rules and grammar rules are executed in an interleaved manner and each inference rule is applied at the earliest possible occasion.
When a $\langle \text{question, logical form} \rangle $ pair is generated for the $S$ nonterminal, we proceed to the second phase of the algorithm, in which resolution rules are applied to generate a corresponding sparql query to make up the third element of the desired $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple. In practice, the bulk of the work in this phase is performed in a top-down fashion, in which resolution rules are recursively applied to transform a starting expression of the form get_specializations($L) (where $L represents the logical form output from the grammar phase) into a sequence of text literals representing the sparql query. This is followed nominally by a bottom-up process to construct the rule application DAG, yielding a tree of resolution rule applications of a similar form to the tree of interleaved grammar and inference rules output from the grammar phase. Note that while the grammar phase involves a large degree of random choice, the resolution phase proceeds much more deterministically, as the CFQ resolution rules have been designed such that any given question can yield only one possible sparql query, modulo commutativity and associativity of $\sqcap $. In cases where resolution rules could be applied in multiple valid orderings, we again apply an arbitrary deterministic ordering to the resolution rules so as to yield as consistent as possible a rule application DAG and $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple for any given question.
Finally, to ease the task of tracking unique query patterns and to minimize the impact on the learning task of implementation details regarding choice of variable names or ordering of clauses, we normalize the final sparql query by alphabetically sorting the query clauses and re-numbering the variables to follow a standard increasing order.
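A minimal sketch of such a normalization step (hypothetical; the exact interplay of sorting and renumbering in the real pipeline is an assumption here):
import re
def normalize_sparql(prefix, clauses):
    # prefix: e.g. 'SELECT DISTINCT ?x0 WHERE'; clauses: list of clause strings.
    ordered = sorted(set(clauses))
    renaming = {}
    for text in [prefix] + ordered:
        for var in re.findall(r"\?x\d+", text):
            renaming.setdefault(var, "?x{}".format(len(renaming)))
    def rename(text):
        return re.sub(r"\?x\d+", lambda m: renaming[m.group(0)], text)
    return rename(prefix) + " { " + " . ".join(rename(c) for c in ordered) + " }"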
The resulting $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple is then appended to the CFQ dataset.
Generation Algorithm ::: Join by Logical Form
In general, we do not explicitly track rules to represent the example-independent behaviors of the generation algorithm, as the universal applicability of these rules means that the complete behavior of the generator should be observable on any reasonably sized train set. The same applies to certain core behaviors of the description logic $\mathcal {EL}$, such as commutativity and associativity of $\sqcap $, which we do not track as explicit rules because they are similarly ubiquitous.
One example-independent rule, however, that we do explicitly track is the rule that describes the handover process between the grammar phase and the resolution phase – or in terms of the rule application DAG, the rule that joins the tree of interleaved grammar and inference rule applications with the tree of resolution rule applications. We call this rule JOIN_BY_LOGICAL_FORM. It is included in the rules list for every example in CFQ and appears as the head of the rule application tree for each example.
Generation Algorithm ::: Relationship between Generation and Parsing
Note that conceptually a similar approach for combining the different rule types could be applied to the semantic parsing task. The main difference would be that, instead of performing random search over the grammar, the semantic parsing task would need to find the set of rules which produce the desired input text.
Generation Algorithm ::: Selecting an appropriate sample set
For many domains, the set of examples generated by exhaustively combining rules is infinite or prohibitively large. For example, the CFQ grammar generates an infinite set of questions, and even when restricted to a reasonable complexity, the set is still too large for practical use. This means that we need to choose which subset of examples we want to include in our dataset. Given our goal of comprehensively measuring compositional generalization, we do this by:
maximizing the overall diversity of rule combinations (allowing us to test as many rule combinations as possible)
while using a uniform distribution from simple examples to increasingly more complex examples.
We measure the diversity of rule combinations of a dataset using the empirical entropy over the frequency distribution of the subgraphs of the rule application DAGs, and we measure the complexity of an example using the number of rule applications used to generate it.
For CFQ, we choose the following practical trade-off between these two criteria. We first generate a sufficiently large sample set by performing random rule applications. We then subsample from it to select a subset that maximizes the entropy of the subgraph distribution (while only taking into account subgraphs with a limited number of nodes for practicality). We use a greedy algorithm that incrementally assigns elements to the subsampled set while maximizing entropy at each step.
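A minimal sketch of this greedy selection (hypothetical and unoptimized; each candidate example is represented by the list of its tracked subgraph identifiers):
import math
from collections import Counter
def entropy(counts):
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c)
def greedy_subsample(candidates, target_size):
    # candidates: list of lists of subgraph identifiers, one list per example.
    selected, counts, pool = [], Counter(), list(candidates)
    while pool and len(selected) < target_size:
        best = max(pool, key=lambda example: entropy(counts + Counter(example)))
        selected.append(best)
        counts.update(best)
        pool.remove(best)
    return selected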
The subsampling is initially limited to examples with the smallest complexity level and continues with increasingly larger complexity levels. We cap the maximum number of examples per level to achieve a uniform distribution across levels, and we limit the maximum complexity level such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity.
Example of a rule application DAG
Figures FIGREF190 through FIGREF192 show the rule application DAG that was produced when generating the question “Who directed [entity]?”. They illustrate how grammar, inference, and knowledge rules are combined to generate a pair of text and logical form, and how resolution rules are used to generate the sparql query for the resulting logical form.
Example of a rule application DAG ::: DAG normalization
As discussed in Section SECREF3, nodes of this DAG represent rule applications while edges represent dependencies among the rules; i.e., an edge $A \rightarrow B$ means that rule $B$ strictly depends on rule $A$ in the sense that the generator cannot apply rule $B$ before applying rule $A$. The DAG is normalized to ensure that a certain rule combination is represented using the same DAG across all the examples where it occurs. This is important for meaningfully comparing measures such as entropy and divergence across subgraphs of different examples.
Specifically, together with adopting the measures described above to ensure that rules are applied in a deterministic order, we achieve the normalization of the DAG by only producing edges that represent “minimal dependencies”. This means that if a rule $A$ can be applied after rule $B$, but it could also be applied after rule $B^{\prime }$ with $B \rightarrow B^{\prime }$ (i.e., $B^{\prime }$ depends on $B$), we don't produce the edge $B^{\prime } \rightarrow A$.
Example of a rule application DAG ::: Concept abbreviations
For brevity, in the rule application DAG figures we have applied the following abbreviations for several lengthy concept names:
Director = FilmDirector
Directee = DirectedFilm
Directing = DirectingAFilm
SubjectAgentVerb = PredicateWithBoundRolePairs(RolePair( SubjectHaver, Subject), RolePair(Predicate, Agent))
ObjectUndergoerVerb = PredicateWithBoundRolePairs(RolePair( ObjectHaver, Object), RolePair(Predicate, Undergoer))
E1 = Entity('?E1')
Example of a rule application DAG ::: Entity placeholders
As described in Section SECREF16, during generation we initially generate a $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\langle \text{question, logical form, \textsc {sparql}{} query} \rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.
Versions of entity rules applicable when using entity placeholders:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
Versions of entity rules applicable when using actual entity MIDs:
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/'m.'$X
$\rightarrow $ 'm.'$X
ENTITY_MID:
ent2sparql('m.'$X) $\rightarrow $ 'ns:m.'$X
Example of a rule application DAG ::: Subgraphs and their weights
Figure FIGREF203 shows an example of subgraphs in order to provide more details on the sampling and weighting of compounds. An example non-linear subgraph is highlighted by the red area, and two linear subgraphs are highlighted by the blue and the yellow areas, respectively.
As described in Section SECREF6, given a large subset $\mathbb {G}$ of subgraphs from the sample set as a whole, we calculate for each sample the weight of each subgraph $G \in \mathbb {G}$ that occurs in that sample as:
$w(G) = \max _{g \in \text{occ}(G)} \left( 1 - \max _{G^{\prime } \succ G \,:\, \exists g^{\prime } \in \text{occ}(G^{\prime }),\, g \prec g^{\prime }} P(G^{\prime } \mid G) \right)$
where $\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\prec $ denotes the strict subgraph relation, and $P(G^{\prime }| G)$ is the empirical probability of $G^{\prime }$ occurring as a supergraph of $G$ over the full sample set.
Intuitively, we are trying to estimate how interesting the subgraph $G$ is in the sample. First, for every occurrence $g$ of a subgraph $G$, we look for the supergraph $G^{\prime }$ of $g$ that co-occurs most often with $G$ in the full sample set. The empirical probability of having $G^{\prime }$ as a supergraph of $G$ determines how interesting the occurrence $g$ is – the higher this probability, the less interesting the occurrence. Thus we compute the weight of the occurrence as the complement of this maximum empirical probability. Then we take the weight of $G$ to be the weight of the most interesting occurrence $g$ of $G$ in the sample.
E.g., in the extreme case that $G$ only occurs within the context $G^{\prime }$, the weight of $G$ will be 0 in all samples. Conversely, if $G$ occurs in many different contexts, such that there is no single other subgraph $G^{\prime }$ that subsumes it in many cases, then $w(G)$ will be high in all samples in which it occurs. This ensures that when calculating compound divergence based on a weighted subset of compounds, the most representative compounds are taken into account, while avoiding double-counting compounds whose frequency of occurrence is already largely explained by the frequency of one of their super-compounds.
Returning to our example in Figure FIGREF203, suppose that $G$ represents the smallest linear subgraph (yellow area), and suppose that the weight of $G$ in this sample is 0.4. Then this means that there exists some other subgraph $G^{\prime }$ (for instance, the linear subgraph highlighted by the blue area) that is a supergraph of $G$ in 60% of the occurrences of $G$ across the sample set.
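A minimal sketch of this weight computation for a single sample (hypothetical data structures; the empirical probabilities $P(G^{\prime } \mid G)$ are assumed to be precomputed over the full sample set):
def compound_weight(occurrences, supergraph_prob):
    # occurrences: for one sample, a list with one entry per occurrence g of G;
    #   each entry lists the compounds G' that are supergraphs of that occurrence.
    # supergraph_prob: maps G' to the empirical probability P(G' | G) computed
    #   over the full sample set.
    weights = []
    for supergraphs_of_g in occurrences:
        p_max = max((supergraph_prob.get(gp, 0.0) for gp in supergraphs_of_g), default=0.0)
        weights.append(1.0 - p_max)
    return max(weights, default=0.0)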
Rules Index
Below is a selection of the rules used in the generation of CFQ. Specifically, this includes all rules involved in generating the question “Who directed [entity]?” (the same example illustrated in the rule application DAG in Appendix SECREF19). The format of the rules is discussed in Appendix SECREF17.
Rules Index ::: Grammar rules
S=WHQ_F6E9egkQqxj:
S/_x
$\rightarrow $ WHQ/_x
WHQ=NPQ_INDIRECT_VP_INDIRECT_TXCca9URgVm:
WHQ[_subject]/DropDependency(_subject) $\sqcap $ DropDependency($\exists $RolePair(Subject, SubjectHaver)._action)
$\rightarrow $ NPQ_INDIRECT(is_what:_none_, number:$n)/_subject
VP_INDIRECT(form:past, number:$n, object:yes, subject:_none_)/_action
NPQ_INDIRECT=WHO_5ptbPXXbuLZ:
NPQ_INDIRECT(number:singular)/Person
$\rightarrow $ 'who'
VP_INDIRECT=VP_INDIRECT_DP_ZJH4NhRkByc:
VP_INDIRECT(object:yes)[_action]/_action $\sqcap $ $\exists $RolePair(ObjectHaver, Object)._object
$\rightarrow $ VP_INDIRECT(object:_none_, subject:_none_)/_action
DP/_object
VP_INDIRECT=ACTIVE_VP_RX51Tm7RXPe:
VP_INDIRECT(object_type:$ut, subject_type:$at)[_head]/_head $\sqcap $ PredicateWithBoundRolePairs(RolePair(SubjectHaver, Subject), RolePair(Predicate, Agent)) $\sqcap $ PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ ACTIVE_VP(agent_type:$at, undergoer_type:$ut)/_head
ACTIVE_VP=VP_SIMPLE_hJqAyjRUYJp:
ACTIVE_VP(number:singular)[_head]/_head
$\rightarrow $ VP_SIMPLE(form:past)/_head
VP_SIMPLE=VP_GHWf3fcVRZg:
VP_SIMPLE(agent_type:person, undergoer_type:movie)[_head]/_head
$\rightarrow $ VP(concept_id:DirectingAFilm)/_head
VP=DIRECTED_JkYzNbQyXtv:
VP(concept_id:DirectingAFilm, form:past)/DirectingAFilm
$\rightarrow $ 'directed'
DP=ENTITY_M6fSP5GvRaN:
DP(is_proper_noun:yes, number:singular)[_head]/_head
$\rightarrow $ ENTITY/_head
ENTITY=[ENTITY]_HSz7QrdGdsX:
ENTITY(number:singular)/Entity(new_var(V1))
$\rightarrow $ '[entity]'
... (211 grammar rules total)
Rules Index ::: Inference rules
BOUND_ROLES_WITH_PREDICATE_OBJECT:
BoundRolePairs($A, RolePair($R, $Q), RolePair($T, $S)):
$\exists $RolePair($Q, $R).($A $\sqcap $ $B) $\rightarrow $ $\exists $RolePair($S, $T).($A $\sqcap $ $B)
BOUND_ROLES_WITH_PREDICATE_SUBJECT:
BoundRolePairs($B, RolePair($Q, $R), RolePair($S, $T)):
$B $\sqcap $ $\exists $RolePair($Q, $R).$A $\rightarrow $ $B $\sqcap $ $\exists $RolePair($S, $T).$A
IGNORE_BOUND_ROLE_PAIRS:
$A $\sqcap $ PredicateWithBoundRolePairs($X, $Y) $\rightarrow $ $A
IGNORE_DEPENDENCY_DROPPING:
DropDependency($X) $\rightarrow $ $X
PREDICATE_UNREIFICATION:
Role($Q, $P), Role($R, $P):
$\exists $RolePair($Q, Predicate).($P $\sqcap $ $\exists $RolePair(Predicate, $R).$A) $\rightarrow $ $\exists $RolePair($Q, $R).$A
... (17 inference rules total)
Rules Index ::: Resolution rules
CONJUNCTION_WITHOUT_ENTITY:
def2sparql($X $\sqcap $ $Y, $V1) $\rightarrow $ def2sparql($X, $V1) ' . ' def2sparql($Y, $V1)
ENTITY_MID:
ent2sparql(Entity($X)) $\rightarrow $ $X
GET_SPECIALIZATIONS:
get_specializations($X) $\rightarrow $ 'SELECT DISTINCT ' get_var($X, new_var($V0)) ' WHERE { ' def2sparql($X, get_var($X, $V0)) '}'
GET_VAR_CONJUNCTION:
get_var($X $\sqcap $ $Y, $V1) $\rightarrow $ shared_var(get_var($X, get_var($Y, $V1)), get_var($Y, get_var($X, $V1)))
GET_VAR_RELATION:
get_var($\exists $$R.$X, $V1) $\rightarrow $ $V1
GET_VAR_TYPE:
FreebaseTypeMapping($X, $F):
get_var($X, $V1) $\rightarrow $ $V1
PROPERTY_MAPPING:
FreebasePropertyMapping($R, $F):
role2sparql($R) $\rightarrow $ $F
RELATION_MAPPING_WITHOUT_EXCLUSION:
NonExclusiveRolePair($R):
rel2sparql($X, $R, $Y) $\rightarrow $ $X role2sparql($R) $Y
RELATION_TO_ENTITY:
def2sparql($\exists $$R.$X, $V1) $\rightarrow $ rel2sparql($V1, $R, ent2sparql($X))
SHARED_VAR:
shared_var($X, $X) $\rightarrow $ $X
SPECIALIZATION_OF_TYPE:
def2sparql($X, $V1) $\rightarrow $ $V1 ' a ' type2sparql($X)
TYPE_MAPPING:
FreebaseTypeMapping($X, $F):
type2sparql($X) $\rightarrow $ $F
... (21 resolution rules total)
Rules Index ::: Knowledge rules
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Agent), RolePair(Predicate, FilmDirector))
$\rightarrow $ BoundRolePairs(DirectingFilm, RolePair(Predicate, Undergoer), RolePair(Predicate, DirectedFilm))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer)), RolePair(ObjectHaver, Object), RolePair(Predicate, Undergoer))
$\rightarrow $ BoundRolePairs(PredicateWithBoundRolePairs(RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate)), RolePair(Subject, SubjectHaver), RolePair(Agent, Predicate))
$\rightarrow $ FreebasePropertyMapping(RolePair(FilmDirector, DirectedFilm), 'ns:film.director.film')
$\rightarrow $ FreebaseTypeMapping(Person, 'ns:people.person')
$\rightarrow $ NonExclusiveRolePair(FilmDirector, DirectedFilm)
$\rightarrow $ Role(DirectedFilm, DirectingFilm)
$\rightarrow $ Role(FilmDirector, DirectingFilm)
... (194 knowledge rules total) | random, Output length, Input length, Output pattern, Input pattern |
4e379d6d5f87554fabf6f7f7b6ed92d2025e7280 | 4e379d6d5f87554fabf6f7f7b6ed92d2025e7280_0 | Q: What problem do they apply transfer learning to?
Text: Introduction
Continuous Speech Keyword Spotting (CSKS) aims to detect embedded keywords in audio recordings. These spotted keyword frequencies can then be used to analyze the theme of communication, creating temporal visualizations and word clouds BIBREF0 . Another use case is to detect domain-specific keywords which ASR (Automatic Speech Recognition) systems trained on public data cannot detect. For example, to detect a TV model number “W884” being mentioned in a recording, we might not have a large number of training sentences containing the model number of a newly launched TV with which to finetune an ASR algorithm. A trained CSKS algorithm can be used to quickly extract all instances of such keywords.
We train CSKS algorithms like other Keyword Spotting algorithms, by classifying small fragments of audio in running speech. This requires the classifier model to have a formalized process to reject unseen instances (everything that is not a keyword, henceforth referred to as background), apart from the ability to differentiate between classes (keywords). Another real-world constraint that needs to be addressed while training such an algorithm is the availability of only a small amount of labeled keyword instances. We combine practices from the fields of transfer learning, few-shot learning and metric learning to get better performance on this low-training-data, imbalanced classification task.
Our work involves :
Our baselines, Honk ( UID9 ) and DeepSpeech-finetune ( UID10 ), had comparatively lower recall and precision. We noticed an improvement when fine-tuning the DeepSpeech model with prototypical loss (DeepSpeech-finetune-prototypical ( UID11 )). While analysing the false positives of this model, we observed that the model gets confused between the keywords and also wrongly classifies background noise as a keyword. To improve this, we combined prototypical loss with a metric loss to reject background (DeepSpeech-finetune-prototypical+metric ( UID14 )). This model gave us the best results.
Related work
In the past, Hidden Markov Models (HMM) BIBREF6 , BIBREF7 , BIBREF8 have been used to solve the CSKS problem. But since HMM techniques use the Viterbi algorithm (which is computationally expensive), a faster approach is required.
Owing to the popularity of deep learning, many recent works such as BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 have used deep learning techniques for many speech processing tasks. In tasks such as ASR, Hannun et al. BIBREF3 proposed an RNN-based model to transcribe speech into text. Even for plain keyword spotting, BIBREF1 , BIBREF2 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 have proposed various deep learning architectures to solve the task. But to the best of our knowledge, no past work has deployed deep learning for spotting keywords in continuous speech.
Recently, a lot of work has been done on training deep learning models with limited training data. Among these, few-shot techniques as proposed by BIBREF18 , BIBREF4 have become popular. Pons et al. BIBREF16 proposed a few-shot technique using prototypical networks BIBREF4 and transfer learning BIBREF19 , BIBREF20 to solve a different audio task.
We took inspiration from these works to design our experiments to solve the CSKS task.
Dataset
Our learning data, which was created in-house, has 20 keywords to be spotted, concerning television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of the learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. The train/validation split was done at the speaker level, so as to make sure that all recordings of a particular speaker are present in only one of the two sets. For testing, we used 10 different 5-minute-long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and mix languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in the learning data is supposed to detect keywords embedded in the conversations of the test set.
Data Preprocessing
Our dataset consisted of isolated keyword instances, but the algorithm trained using this data needs to classify keywords in fragments of running conversations. To address this, we simulate the continuous speech scenario, both for keyword-containing audio and for background fragments, using publicly available audio data consisting of podcast audio, songs, and audio narration files. To simulate fragments with keywords, we extract two random contiguous chunks from these publicly available audio files and insert the keyword either at the beginning, in the middle, or at the end of the chunks, thus creating an audio segment of 2 seconds. Random 2-second segments taken from publicly available audio are used to simulate segments with no keywords (also referred to as background elsewhere in the paper). These artificially simulated audio chunks, built from the train/validation sets of pure keyword utterances, were used to train/validate the model. Since the test data is quite noisy, we further used various augmentation techniques such as time-shift, pitch-shift and intensity variation. Furthermore, we used the same strategy as Tang et al. BIBREF2 of caching the data while training the deep neural network on batches and artificially generating only 30% of the data that goes into a batch. By following these techniques, we could increase the data many-fold, which not only helped the model generalise better but also reduced the data preparation time during every epoch.
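As a rough illustration of this simulation step, the NumPy sketch below builds a 2-second training chunk by overlaying a keyword waveform onto background audio at a random position and applying simple time-shift and intensity augmentations; the 16 kHz sampling rate, augmentation ranges, and random placement are assumptions rather than the exact procedure used here.

import numpy as np

SR = 16000          # assumed sampling rate
CHUNK_LEN = 2 * SR  # 2-second training segments

def simulate_chunk(keyword, background, rng):
    """Insert a keyword waveform at a random position inside a 2 s background chunk."""
    start = rng.integers(0, CHUNK_LEN)
    chunk = background[:CHUNK_LEN].copy()
    end = min(start + len(keyword), CHUNK_LEN)
    chunk[start:end] += keyword[: end - start]
    return chunk

def augment(chunk, rng):
    """Cheap augmentations: circular time-shift and intensity (gain) variation."""
    shift = rng.integers(-SR // 10, SR // 10)      # up to +/-100 ms
    gain = rng.uniform(0.7, 1.3)
    return np.roll(chunk, shift) * gain

rng = np.random.default_rng(0)
keyword = rng.standard_normal(SR // 2)       # stand-in for a recorded keyword
background = rng.standard_normal(CHUNK_LEN)  # stand-in for podcast/song audio
x = augment(simulate_chunk(keyword, background, rng), rng)
print(x.shape)  # (32000,)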
Feature Engineering
For all the experiments using the Honk architecture, MFCC features were used. To extract these features, a 20Hz/4kHz band-pass filter was used to reduce random noise. Forty-dimensional Mel-Frequency Cepstral Coefficients (MFCC) were constructed and stacked using a 20 millisecond window size with a 10 millisecond overlap. For all the experiments using the DeepSpeech architecture, we extracted spectrograms of the audio files using a 20 millisecond window size with a 10 millisecond overlap and an nfft value of 480.
Deep Learning Architectures
Honk is a baseline Neural Network architecture we used to address the problem. Honk has shown good performance on normal Keyword Spotting and thus was our choice as the first baseline. The neural network is a Deep Residual Convolutional Neural Network BIBREF21 which has the number of feature maps fixed across all residual blocks. The Python code of the model was taken from the open source repository BIBREF22 . We tried changing the training strategies of the Honk architecture using the methods we describe later for DeepSpeech, but this did not improve the accuracy.
DeepSpeech-finetune fine-tunes the weights of the openly available DeepSpeech BIBREF3 model (the initial feature extraction layers, not the final ASR layer) for the CSKS task. The architecture consists of the pretrained initial layers of DeepSpeech followed by a set of LSTM layers and a Fully Connected layer (initialized randomly) for classification. The pretrained layers taken from DeepSpeech are the initial 2D convolution layers and the GRU layers which process the output of the 2D convolutions. The output of the Fully Connected layer is fed into a softmax, and a cross-entropy loss is then used to train the algorithm for classification. Note that this fine-tuned model is trained on 21 classes (20 keywords + 1 background), as in the aforementioned Honk model. The architecture can be seen in Fig. FIGREF6 .
The next model we try is fine-tuning the DeepSpeech model but with a different loss function, taken from BIBREF4 . Prototypical loss works by concentrating the embeddings of all data points of a class around the class prototype. This is done by putting a softmax over the negative distances from the different prototypes to determine the probability of belonging to the corresponding classes. The architecture FIGREF7 is the same as DeepSpeech-finetune, except that the output of the pre-final layer is taken as the embedding rather than applying a Fully Connected layer for classification. These embeddings are then used to calculate Euclidean distances between datapoints and prototypes, represented as $d$ in the formulae. The softmax over negative distances from the prototypes is used to train the cross-entropy loss. During training, examples of each class are divided into support and query embeddings. The support embeddings are used to determine the prototypes of the class. Equation EQREF12 shows the derivation of the prototype of the $k$-th class, where $f_\theta$ is the neural network yielding the embedding and $S_k$ is the set of support vectors for the class. When training the prototypical loss, the distance of query vectors from the prototype of the class they belong to is minimized and the distance from the prototypes of other classes is maximized. The negative distances from the prototypes of each class are passed into a softmax to get the probability of belonging to a class, as shown in equation EQREF13 . We see better results when we train the algorithm using prototypical loss than with normal cross entropy. On qualitatively observing the output from DeepSpeech-finetune-prototypical, we see that mistakes involving confusion between keywords are far fewer than background datapoints being classified as one of the keywords. We hypothesize that this might be due to treating the entire background data as one class. The variance of background is very high, and treating it as one class (a unimodal class in the case of prototypes) might not be the best approach. To address this, we propose the next method, where we use prototypes for classification within keywords and an additional metric loss component to keep the distances of background datapoints from each prototype high. $\mathbf{c}_k = \frac{1}{|S_k|} \sum_{\mathbf{x}_i \in S_k} f_\theta(\mathbf{x}_i)$ (EQREF12) and $P(y = k \mid \mathbf{x}) = \frac{\exp(-d(f_\theta(\mathbf{x}), \mathbf{c}_k))}{\sum_{k^{\prime}} \exp(-d(f_\theta(\mathbf{x}), \mathbf{c}_{k^{\prime}}))}$ (EQREF13).
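A minimal PyTorch sketch of the prototypical loss described above: prototypes are the means of the support embeddings (EQREF12) and query points are scored by a softmax over negative Euclidean distances to the prototypes (EQREF13). The embedding network is left abstract and the tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def prototypical_loss(support_emb, support_labels, query_emb, query_labels, num_classes):
    """support_emb: [Ns, D], query_emb: [Nq, D]; labels are integer class ids."""
    # Class prototypes: mean of the support embeddings of each class (EQREF12).
    prototypes = torch.stack([
        support_emb[support_labels == k].mean(dim=0) for k in range(num_classes)
    ])                                            # [K, D]
    # Negative Euclidean distances act as logits (EQREF13).
    logits = -torch.cdist(query_emb, prototypes)  # [Nq, K]
    return F.cross_entropy(logits, query_labels)

# Toy usage with random embeddings standing in for the network's output.
D, K = 64, 20
s_emb, q_emb = torch.randn(40, D), torch.randn(10, D)
s_lab = torch.arange(K).repeat(2)            # two support examples per class
q_lab = torch.randint(0, K, (10,))
loss = prototypical_loss(s_emb, s_lab, q_emb, q_lab, K)
print(loss.item())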
We motivate the components of the loss function of this variant by the failures of the prototypical loss stated earlier. The architecture is the same as in FIGREF7 , but the loss function is different from DeepSpeech-finetune-prototypical. While in DeepSpeech-finetune-prototypical we trained the prototype loss with 21 classes (20 keywords + 1 background), in DeepSpeech-finetune-prototypical+metric the prototype loss is trained only amongst the 20 keywords, and a new additional metric loss component inspired by BIBREF5 is added to the loss function. This metric loss component aims to bring datapoints of the same class closer together and push datapoints of different classes further apart. Datapoints belonging to background are treated as belonging to a different class from every other datapoint in a batch. So for each object in a batch, we add a loss component like equation EQREF15 to the prototypical loss, computed over the set of all datapoints in the batch belonging to the same class as that object and the set of all datapoints belonging to different classes than that object (including background). This architecture gets the best results. DISPLAYFORM0
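Since the exact form of the metric component (equation EQREF15) is not reproduced above, the sketch below uses a common contrastive-style stand-in — pulling same-class embeddings together and pushing different-class embeddings (with each background point treated as its own class) beyond a margin — purely to illustrate how such a term can be added to the prototypical loss; the margin and functional form are assumptions, not the paper's equation.

import torch

def metric_component(embeddings, labels, margin=1.0):
    """Contrastive-style stand-in for the metric term, averaged over the batch.
    Background points should each carry a unique label so that they count as
    negatives for every other datapoint, including other background points."""
    dists = torch.cdist(embeddings, embeddings)           # [N, N]
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # [N, N] bool
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = dists[same & ~eye]                              # same-class pairs
    neg = dists[~same]                                    # different-class pairs
    pull = pos.mean() if pos.numel() > 0 else dists.new_zeros(())
    push = torch.clamp(margin - neg, min=0).mean()
    return pull + push

emb = torch.randn(16, 64)
lab = torch.cat([torch.randint(0, 20, (12,)),   # keyword datapoints (classes 0-19)
                 torch.arange(1000, 1004)])     # 4 background points, each unique
print(metric_component(emb, lab).item())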
Experiments, Results and Discussion
While testing, the distance of a datapoint is checked against all the prototypes to determine its predicted class. Overlapping chunks of running audio are sent to the classifier to be classified for the presence of a keyword.
Train set numbers corresponding to all the models are shown in Table TABREF16 . DeepSpeech-finetune-prototypical+metric clearly beats the baselines in terms of both precision and recall. Honk is a respectable baseline and gets the second best results after DeepSpeech-finetune-prototypical+metric; however, attempts to improve Honk's performance using prototype loss and metric loss did not work at all.
Our method to combine prototypical loss with metric learning can be used for any classification problem which has a set of classes and a large background class, but its effectiveness needs to be tested on other datasets. | CSKS task |
518d0847b02b4f23a8f441faa38b935c9b892e1e | 518d0847b02b4f23a8f441faa38b935c9b892e1e_0 | Q: What are the baselines?
Text: Introduction
Continuous Speech Keyword Spotting (CSKS) aims to detect embedded keywords in audio recordings. These spotted keyword frequencies can then be used to analyze theme of communication, creating temporal visualizations and word clouds BIBREF0 . Another use case is to detect domain specific keywords which ASR (Automatic Speech Recognition) systems trained on public data cannot detect. For example, to detect a TV model number “W884” being mentioned in a recording, we might not have a large number of training sentences containing the model number of a newly launched TV to finetune a speech recognition (ASR) algorithm. A trained CSKS algorithm can be used to quickly extract out all instances of such keywords.
We train CSKS algorithms like other Keyword Spotting algorithms by classifying small fragments of audio in running speech. This requires the classifier model to have a formalized process to reject unseen instances (everything not a keyword, henceforth referred to as background) apart from ability to differentiate between classes (keywords). Another real world constraint that needs to be addressed while training such an algorithm is the availability of small amount of labeled keyword instances. We combine practices from fields of transfer learning, few-shot learning and metric learning to get better performance on this low training data imbalanced classification task.
Our work involves :
Our baselines, Honk( UID9 ), DeepSpeech-finetune( UID10 ), had comparatively both lower recall and precision. We noticed an improvement when fine tuning DeepSpeech model with prototypical loss (DeepSpeech-finetune-prototypical ( UID11 )). While analysing the false positives of this model, it was observed that the model gets confused between the keywords and it also wrongly classifies background noise as a keyword. To improve this, we combined prototypical loss with a metric loss to reject background (DeepSpeech-finetune-prototypical+metric( UID14 )). This model gave us the best results.
Related work
In the past, Hidden Markov Models (HMM) BIBREF6 , BIBREF7 , BIBREF8 have been used to solve the CSKS problem. But since HMM techniques use the Viterbi algorithm (which is computationally expensive), a faster approach is required.
Owing to the popularity of deep learning, many recent works such as BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 have used deep learning techniques for many speech processing tasks. In tasks such as ASR, Hannun et al. BIBREF3 proposed an RNN-based model to transcribe speech into text. Even for plain keyword spotting, BIBREF1 , BIBREF2 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 have proposed various deep learning architectures to solve the task. But to the best of our knowledge, no past work has deployed deep learning for spotting keywords in continuous speech.
Recently, a lot of work has been done on training deep learning models with limited training data. Among these, few-shot techniques as proposed by BIBREF18 , BIBREF4 have become popular. Pons et al. BIBREF16 proposed a few-shot technique using prototypical networks BIBREF4 and transfer learning BIBREF19 , BIBREF20 to solve a different audio task.
We took inspiration from these works to design our experiments to solve the CSKS task.
Dataset
Our learning data, which was created in-house, has 20 keywords to be spotted, concerning television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of the learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. The train/validation split was done at the speaker level, so as to make sure that all recordings of a particular speaker are present in only one of the two sets. For testing, we used 10 different 5-minute-long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and mix languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in the learning data is supposed to detect keywords embedded in the conversations of the test set.
Data Preprocessing
Our dataset consisted of keyword instances but the algorithm trained using this data needs to classify keywords in fragments of running conversations. To address this, we simulate the continuous speech scenario, both for keyword containing audio and background fragments, by using publicly available audio data which consisted of podcasts audio, songs, and audio narration files. For simulating fragments with keywords, we extract two random contiguous chunks from these publicly available audio files and insert the keyword either in the beginning, in the middle or in the end of the chunks, thus creating an audio segment of 2 seconds. Random 2 second segments taken from publicly available audio are used to simulate segments with no keywords(also referred to as background elsewhere in the paper). These artificially simulated audio chunks from train/validation set of pure keyword utterances were used to train/validate the model. Since the test data is quite noisy, we further used various kinds of techniques such as time-shift, pitch-shift and intensity variation to augment the data. Furthermore we used the same strategy as Tang et al. BIBREF2 of caching the data while training deep neural network on batches and artificially generating only 30% data which goes into a batch. By following these techniques, we could increase the data by many folds which not only helped the model to generalise better but also helped reduce the data preparation time during every epoch.
Feature Engineering
For all the experiments using the Honk architecture, MFCC features were used. To extract these features, a 20Hz/4kHz band-pass filter was used to reduce random noise. Forty-dimensional Mel-Frequency Cepstral Coefficients (MFCC) were constructed and stacked using a 20 millisecond window size with a 10 millisecond overlap. For all the experiments using the DeepSpeech architecture, we extracted spectrograms of the audio files using a 20 millisecond window size with a 10 millisecond overlap and an nfft value of 480.
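For illustration, the sketch below extracts the two feature types described above with librosa — 40-dimensional MFCCs with a 20 ms window and 10 ms hop, and a spectrogram with an n_fft of 480; the 16 kHz sampling rate and the use of librosa itself are assumptions about the implementation, and the band-pass filtering step is omitted.

import numpy as np
import librosa

sr = 16000                      # assumed sampling rate
win = int(0.020 * sr)           # 20 ms window -> 320 samples
hop = int(0.010 * sr)           # 10 ms hop    -> 160 samples
y = np.random.randn(2 * sr)     # stand-in for a 2-second audio chunk

# 40-dimensional MFCCs for the Honk experiments.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40, n_fft=win, hop_length=hop)

# Magnitude spectrogram (n_fft = 480) for the DeepSpeech-based experiments.
spec = np.abs(librosa.stft(y, n_fft=480, win_length=win, hop_length=hop))

print(mfcc.shape, spec.shape)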
Deep Learning Architectures
Honk is a baseline Neural Network architecture we used to address the problem. Honk has shown good performance on normal Keyword Spotting and thus was our choice as the first baseline. The neural network is a Deep Residual Convolutional Neural Network BIBREF21 which has the number of feature maps fixed across all residual blocks. The Python code of the model was taken from the open source repository BIBREF22 . We tried changing the training strategies of the Honk architecture using the methods we describe later for DeepSpeech, but this did not improve the accuracy.
DeepSpeech-finetune fine-tunes the weights of the openly available DeepSpeech BIBREF3 model (the initial feature extraction layers, not the final ASR layer) for the CSKS task. The architecture consists of the pretrained initial layers of DeepSpeech followed by a set of LSTM layers and a Fully Connected layer (initialized randomly) for classification. The pretrained layers taken from DeepSpeech are the initial 2D convolution layers and the GRU layers which process the output of the 2D convolutions. The output of the Fully Connected layer is fed into a softmax, and a cross-entropy loss is then used to train the algorithm for classification. Note that this fine-tuned model is trained on 21 classes (20 keywords + 1 background), as in the aforementioned Honk model. The architecture can be seen in Fig. FIGREF6 .
The next model we try is fine tuning DeepSpeech model but with a different loss function. This loss function is taken from BIBREF4 . Prototypical loss works by concentrating embeddings of all data points of a class around the class prototype. This is done by putting a softmax on the negative distances from different prototypes to determine the probability to belong to corresponding classes. The architecture FIGREF7 is same as DeepSpeech-finetune, except output of pre-final layer is taken as embedding rather than applying a Fully Connected layer for classification. These embeddings are then used to calculate euclidean distances between datapoints and prototypes, represented as INLINEFORM0 in formulae. The softmax over negative distances from prototypes is used to train cross-entropy loss. During training, examples of each class are divided into support and query embeddings. The support embeddings are used to determine prototypes of the class. Equation EQREF12 shows derivation of prototype of INLINEFORM1 class where INLINEFORM2 is the neural network yielding the embedding and INLINEFORM3 is the set of support vectors for the class. The distance of query vectors from the prototypes of the class they belong to are minimized and prototypes of other classes is maximized when training the prototypical loss. The negative distances from the prototypes of each class are passed into softmax to get the probability of belonging in a class as shown in equation EQREF13 . We see better results when we train the algorithm using prototypical loss than normal cross entropy. On qualitatively observing the output from DeepSpeech-finetune-prototypical we see that the mistakes involving confusion between keywords are very less compared to datapoints of the class background being classified as one of the keywords. We hypothesize that this might be due to treating the entire background data as one class. The variance of background is very high and treating it as one class (a unimodal class in case of prototypes) might not be the best approach. To address this, we propose the next method where we use prototypes for classification within keywords and an additional metric loss component to keep distances of background datapoints from each prototype high. DISPLAYFORM0 DISPLAYFORM1
We hypothesize the components of loss function of this variant from failures of prototypical loss as stated earlier. The architecture is same as in FIGREF7 , but the loss function is different from DeepSpeech-finetune-prototypical. While in DeepSpeech-finetune-prototypical, we trained prototype loss with 21 classes(20 keywords + 1 background), in DeepSpeech-finetune-prototypical+metric prototype loss is trained only amongst the 20 keywords and a new additional metric loss component inspired from BIBREF5 is added to loss function. This metric loss component aims to bring datapoints of same class together and datapoints of different class further. Datapoints belonging to background are treated as different class objects for all other datapoints in a batch. So for each object in a batch, we add a loss component like equation EQREF15 to prototypical loss. INLINEFORM0 is all datapoints in the batch belonging to the same class as INLINEFORM1 and INLINEFORM2 is all datapoints belonging to different classes than INLINEFORM3 (including background). This architecture gets the best results. DISPLAYFORM0
Experiments, Results and Discussion
While testing, the distance of a datapoint is checked with all the prototypes to determine its predicted class. Overlapping chunks of running audio are sent to the classifier to get classified for presence of a keyword.
Train set numbers corresponding to all the models are shown in Table TABREF16 . DeepSpeech-finetune-prototypical+metric clearly beats the baselines in terms of both precision and recall. Honk is a respectable baseline and gets the second best results after DeepSpeech-finetune-prototypical+metric; however, attempts to improve Honk's performance using prototype loss and metric loss did not work at all.
Our method to combine prototypical loss with metric learning can be used for any classification problem which has a set of classes and a large background class, but its effectiveness needs to be tested on other datasets. | Honk, DeepSpeech-finetune |
8112d18681e266426cf7432ac4928b87f5ce8311 | 8112d18681e266426cf7432ac4928b87f5ce8311_0 | Q: What languages are considered?
Text: Introduction
Continuous Speech Keyword Spotting (CSKS) aims to detect embedded keywords in audio recordings. These spotted keyword frequencies can then be used to analyze theme of communication, creating temporal visualizations and word clouds BIBREF0 . Another use case is to detect domain specific keywords which ASR (Automatic Speech Recognition) systems trained on public data cannot detect. For example, to detect a TV model number “W884” being mentioned in a recording, we might not have a large number of training sentences containing the model number of a newly launched TV to finetune a speech recognition (ASR) algorithm. A trained CSKS algorithm can be used to quickly extract out all instances of such keywords.
We train CSKS algorithms like other Keyword Spotting algorithms by classifying small fragments of audio in running speech. This requires the classifier model to have a formalized process to reject unseen instances (everything not a keyword, henceforth referred to as background) apart from ability to differentiate between classes (keywords). Another real world constraint that needs to be addressed while training such an algorithm is the availability of small amount of labeled keyword instances. We combine practices from fields of transfer learning, few-shot learning and metric learning to get better performance on this low training data imbalanced classification task.
Our work involves :
Our baselines, Honk( UID9 ), DeepSpeech-finetune( UID10 ), had comparatively both lower recall and precision. We noticed an improvement when fine tuning DeepSpeech model with prototypical loss (DeepSpeech-finetune-prototypical ( UID11 )). While analysing the false positives of this model, it was observed that the model gets confused between the keywords and it also wrongly classifies background noise as a keyword. To improve this, we combined prototypical loss with a metric loss to reject background (DeepSpeech-finetune-prototypical+metric( UID14 )). This model gave us the best results.
Related work
In the past, Hidden Markov Models (HMM) BIBREF6 , BIBREF7 , BIBREF8 have been used to solve the CSKS problem. But since HMM techniques use the Viterbi algorithm (which is computationally expensive), a faster approach is required.
Owing to the popularity of deep learning, many recent works such as BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 have used deep learning techniques for many speech processing tasks. In tasks such as ASR, Hannun et al. BIBREF3 proposed an RNN-based model to transcribe speech into text. Even for plain keyword spotting, BIBREF1 , BIBREF2 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 have proposed various deep learning architectures to solve the task. But to the best of our knowledge, no past work has deployed deep learning for spotting keywords in continuous speech.
Recently, a lot of work has been done on training deep learning models with limited training data. Among these, few-shot techniques as proposed by BIBREF18 , BIBREF4 have become popular. Pons et al. BIBREF16 proposed a few-shot technique using prototypical networks BIBREF4 and transfer learning BIBREF19 , BIBREF20 to solve a different audio task.
We took inspiration from these works to design our experiments to solve the CSKS task.
Dataset
Our learning data, which was created in-house, has 20 keywords to be spotted, concerning television models of a consumer electronics brand. It was collected by making 40 participants utter each keyword 3 times. Each participant recorded in normal ambient noise conditions. As a result, after collection of the learning data we have 120 (3 x 40) instances of each of the 20 keywords. We split the learning data 80:20 into train and validation sets. The train/validation split was done at the speaker level, so as to make sure that all recordings of a particular speaker are present in only one of the two sets. For testing, we used 10 different 5-minute-long simulated conversational recordings of television salesmen and customers from a shopping mall in India. These recordings contain background noise (as is expected in a mall) and mix languages (Indians speak a mixture of English and Hindi). The CSKS algorithm trained on instances of keywords in the learning data is supposed to detect keywords embedded in the conversations of the test set.
Data Preprocessing
Our dataset consisted of keyword instances but the algorithm trained using this data needs to classify keywords in fragments of running conversations. To address this, we simulate the continuous speech scenario, both for keyword containing audio and background fragments, by using publicly available audio data which consisted of podcasts audio, songs, and audio narration files. For simulating fragments with keywords, we extract two random contiguous chunks from these publicly available audio files and insert the keyword either in the beginning, in the middle or in the end of the chunks, thus creating an audio segment of 2 seconds. Random 2 second segments taken from publicly available audio are used to simulate segments with no keywords(also referred to as background elsewhere in the paper). These artificially simulated audio chunks from train/validation set of pure keyword utterances were used to train/validate the model. Since the test data is quite noisy, we further used various kinds of techniques such as time-shift, pitch-shift and intensity variation to augment the data. Furthermore we used the same strategy as Tang et al. BIBREF2 of caching the data while training deep neural network on batches and artificially generating only 30% data which goes into a batch. By following these techniques, we could increase the data by many folds which not only helped the model to generalise better but also helped reduce the data preparation time during every epoch.
Feature Engineering
For all the experiments using the Honk architecture, MFCC features were used. To extract these features, a 20Hz/4kHz band-pass filter was used to reduce random noise. Forty-dimensional Mel-Frequency Cepstral Coefficients (MFCC) were constructed and stacked using a 20 millisecond window size with a 10 millisecond overlap. For all the experiments using the DeepSpeech architecture, we extracted spectrograms of the audio files using a 20 millisecond window size with a 10 millisecond overlap and an nfft value of 480.
Deep Learning Architectures
Honk is a baseline Neural Network architecture we used to address the problem. Honk has shown good performance on normal Keyword Spotting and thus was our choice as the first baseline. The neural network is a Deep Residual Convolutional Neural Network BIBREF21 which has the number of feature maps fixed across all residual blocks. The Python code of the model was taken from the open source repository BIBREF22 . We tried changing the training strategies of the Honk architecture using the methods we describe later for DeepSpeech, but this did not improve the accuracy.
DeepSpeech-finetune fine-tunes the weights of the openly available DeepSpeech BIBREF3 model (the initial feature extraction layers, not the final ASR layer) for the CSKS task. The architecture consists of the pretrained initial layers of DeepSpeech followed by a set of LSTM layers and a Fully Connected layer (initialized randomly) for classification. The pretrained layers taken from DeepSpeech are the initial 2D convolution layers and the GRU layers which process the output of the 2D convolutions. The output of the Fully Connected layer is fed into a softmax, and a cross-entropy loss is then used to train the algorithm for classification. Note that this fine-tuned model is trained on 21 classes (20 keywords + 1 background), as in the aforementioned Honk model. The architecture can be seen in Fig. FIGREF6 .
The next model we try is fine tuning DeepSpeech model but with a different loss function. This loss function is taken from BIBREF4 . Prototypical loss works by concentrating embeddings of all data points of a class around the class prototype. This is done by putting a softmax on the negative distances from different prototypes to determine the probability to belong to corresponding classes. The architecture FIGREF7 is same as DeepSpeech-finetune, except output of pre-final layer is taken as embedding rather than applying a Fully Connected layer for classification. These embeddings are then used to calculate euclidean distances between datapoints and prototypes, represented as INLINEFORM0 in formulae. The softmax over negative distances from prototypes is used to train cross-entropy loss. During training, examples of each class are divided into support and query embeddings. The support embeddings are used to determine prototypes of the class. Equation EQREF12 shows derivation of prototype of INLINEFORM1 class where INLINEFORM2 is the neural network yielding the embedding and INLINEFORM3 is the set of support vectors for the class. The distance of query vectors from the prototypes of the class they belong to are minimized and prototypes of other classes is maximized when training the prototypical loss. The negative distances from the prototypes of each class are passed into softmax to get the probability of belonging in a class as shown in equation EQREF13 . We see better results when we train the algorithm using prototypical loss than normal cross entropy. On qualitatively observing the output from DeepSpeech-finetune-prototypical we see that the mistakes involving confusion between keywords are very less compared to datapoints of the class background being classified as one of the keywords. We hypothesize that this might be due to treating the entire background data as one class. The variance of background is very high and treating it as one class (a unimodal class in case of prototypes) might not be the best approach. To address this, we propose the next method where we use prototypes for classification within keywords and an additional metric loss component to keep distances of background datapoints from each prototype high. DISPLAYFORM0 DISPLAYFORM1
We hypothesize the components of loss function of this variant from failures of prototypical loss as stated earlier. The architecture is same as in FIGREF7 , but the loss function is different from DeepSpeech-finetune-prototypical. While in DeepSpeech-finetune-prototypical, we trained prototype loss with 21 classes(20 keywords + 1 background), in DeepSpeech-finetune-prototypical+metric prototype loss is trained only amongst the 20 keywords and a new additional metric loss component inspired from BIBREF5 is added to loss function. This metric loss component aims to bring datapoints of same class together and datapoints of different class further. Datapoints belonging to background are treated as different class objects for all other datapoints in a batch. So for each object in a batch, we add a loss component like equation EQREF15 to prototypical loss. INLINEFORM0 is all datapoints in the batch belonging to the same class as INLINEFORM1 and INLINEFORM2 is all datapoints belonging to different classes than INLINEFORM3 (including background). This architecture gets the best results. DISPLAYFORM0
Experiments, Results and Discussion
While testing, the distance of a datapoint is checked against all the prototypes to determine its predicted class. Overlapping chunks of running audio are sent to the classifier to be classified for the presence of a keyword.
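A rough sketch of this test-time procedure — sliding overlapping 2-second windows over a recording, embedding each window, and assigning it to the nearest prototype (falling back to background when no prototype is close enough) — is given below; the stride, the distance threshold, and the embedding function are illustrative assumptions.

import torch

def spot_keywords(embed_fn, audio, prototypes, sr=16000, win_s=2.0, hop_s=0.25, bg_threshold=5.0):
    """Slide overlapping windows over `audio` and return (time, keyword_id) detections.
    `embed_fn` maps a waveform window to an embedding vector;
    `prototypes` is a [K, D] tensor of keyword prototypes."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    detections = []
    for start in range(0, max(1, len(audio) - win + 1), hop):
        emb = embed_fn(audio[start:start + win])               # [D]
        dists = torch.cdist(emb.unsqueeze(0), prototypes)[0]   # [K]
        best = int(torch.argmin(dists))
        if dists[best] < bg_threshold:                         # otherwise treat as background
            detections.append((start / sr, best))
    return detections

# Toy usage with a random stand-in for the embedding network and prototypes.
protos = torch.randn(20, 64)
fake_embed = lambda window: torch.randn(64)
audio = torch.randn(16000 * 10)   # 10 seconds of audio
print(spot_keywords(fake_embed, audio, protos)[:3])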
Train set numbers corresponding to all the models are shown in Table TABREF16 . DeepSpeech-finetune-prototypical+metric clearly beats the baselines in terms of both precision and recall. Honk is a respectable baseline and gets the second best results after DeepSpeech-finetune-prototypical+metric; however, attempts to improve Honk's performance using prototype loss and metric loss did not work at all.
Our method to combine prototypical loss with metric learning can be used for any classification problem which has a set of classes and a large background class, but its effectiveness needs to be tested on other datasets. | English, Hindi |
b14f13f2a3a316e5a5de9e707e1e6ed55e235f6f | b14f13f2a3a316e5a5de9e707e1e6ed55e235f6f_0 | Q: Does this model train faster than state of the art models?
Text: Introduction
Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $).
Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\theta (\mathbf {y}|\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:
Each factor, $P_\theta (y_{t} | y_{<t}, \mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large output space of outputs $\mathbf {y}$, and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation of outputs must be done through a linear left-to-right pass through the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.
Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\theta (\mathbf {y}|\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:
Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\mathbf {z}$ to model these conditional dependencies:
where $p_{\theta }(\mathbf {z}|\mathbf {x})$ is the prior distribution over latent $\mathbf {z}$ and $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:
BIBREF6 proposed a $\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\textbf {y}$.
In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models expressive prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has potential to introduce more meaningful latent variables $\mathbf {z}$ in the non-autoregressive generation in Eq. (DISPLAY_FORM5).
FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear.
Background
As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3.
Background ::: Flow-based Generative Models
Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations.
Formally, a set of latent variables $\mathbf {\upsilon } \in \Upsilon $ are introduced with a simple prior distribution $p_{\Upsilon }(\upsilon )$. We then define a bijection function $f: \mathcal {Z} \rightarrow \Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\mathbf {z}$:
An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\mathbf {z}\in \mathcal {Z}$ by:
Here $\frac{\partial f_{\theta }(\mathbf {z})}{\partial \mathbf {z}}$ is the Jacobian matrix of $f_{\theta }$ at $\mathbf {z}$.
Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\mathbf {z}$ by calculating the (simple) density of $\upsilon $ and the Jacobian of the transformation from $\mathbf {z}$ to $\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\theta }$ where both the inverse functions $g_{\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:
where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations (omitting $\theta $s for brevity).
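To make this composition concrete, the following PyTorch sketch chains invertible steps that each return their output and log-Jacobian-determinant, and accumulates the change-of-variable terms on top of a standard Gaussian base density; the `AffineStep` module and the step interface are illustrative assumptions, not FlowSeq's actual layers.

import math
import torch

class AffineStep(torch.nn.Module):
    """Toy invertible step: elementwise affine transform with a tractable log-det."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = torch.nn.Parameter(torch.zeros(dim))
        self.bias = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, z):                      # z: [T, d] -> (output, logdet)
        return z * self.log_scale.exp() + self.bias, self.log_scale.sum() * z.size(0)

def flow_log_prob(steps, z):
    """log p(z) = log p_base(f(z)) + sum over steps of log|det df_i/dz|."""
    logdet_total = z.new_zeros(())
    for step in steps:
        z, logdet = step(z)
        logdet_total = logdet_total + logdet
    base_logp = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum()   # standard Gaussian base
    return base_logp + logdet_total

steps = torch.nn.ModuleList([AffineStep(16) for _ in range(3)])
z = torch.randn(8, 16)                        # a length-8 latent sequence, d_z = 16
print(flow_log_prob(steps, z).item())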
Background ::: Variational Inference and Training
In the context of maximal likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:
where $D=\lbrace (\mathbf {x}^i, \mathbf {y}^i)\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\theta }(\mathbf {y}| \mathbf {x})$ after marginalizing out latent variables $\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\phi }(\mathbf {z}|\mathbf {y}, \mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\log P_\theta (\mathbf {y}|\mathbf {z},\mathbf {x})$ and KL-divergence between the posterior and the prior:
Both inference model $\phi $ and decoder $\theta $ parameters are optimized according to this objective.
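As an illustration of how this objective can be estimated in practice, the sketch below computes a single-sample ELBO with the reparameterization trick; `encoder_q`, `decoder_logp`, and `prior_logp` are hypothetical stand-ins for the posterior, decoder, and prior components, not FlowSeq's actual modules.

import math
import torch

def elbo(x, y, encoder_q, decoder_logp, prior_logp):
    """Single-sample ELBO: E_q[log P(y|z,x)] - KL(q(z|y,x) || p(z|x)),
    with the KL estimated as log q(z|y,x) - log p(z|x) at the sampled z."""
    mu, log_sigma = encoder_q(x, y)                       # diagonal Gaussian posterior
    eps = torch.randn_like(mu)
    z = mu + log_sigma.exp() * eps                        # reparameterization trick
    log_q = (-0.5 * ((z - mu) / log_sigma.exp()) ** 2
             - log_sigma - 0.5 * math.log(2 * math.pi)).sum()
    return decoder_logp(y, z, x) + prior_logp(z, x) - log_q

# Toy stand-ins with matching shapes, just to show the call pattern.
T, d = 6, 8
enc = lambda x, y: (torch.zeros(T, d), torch.zeros(T, d))
dec = lambda y, z, x: -torch.tensor(10.0)
prior = lambda z, x: -0.5 * (z ** 2).sum()
print(elbo(None, None, enc, dec, prior).item())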
FlowSeq
We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.
At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\mathbf {z}$ from the current posterior $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$. Next, we feed $\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ and $p_{\theta }(\mathbf {z}|\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).
At test time, generation is performed by first sampling a latent code $\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$.
FlowSeq ::: Source Encoder
The source encoder encodes the source sequences into hidden representations, which are used in computing attention when generating latent variables in the posterior network and prior network as well as the cross-attention with decoder. Any standard neural sequence model can be used as its encoder, including RNNs BIBREF0 or Transformers BIBREF3.
FlowSeq ::: Posterior ::: Generation of Latent Variables.
The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:
where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers.
FlowSeq ::: Posterior ::: Zero initialization.
While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution as a simple normal distribution, which we found helps train very deep generative flows more stably.
FlowSeq ::: Posterior ::: Token Dropout.
The motivation of introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.
FlowSeq ::: Decoder
As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive.
FlowSeq ::: Flow Architecture for Prior
The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13). Each step of flow consists of three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to calculation of log determinants (details in Appendix SECREF6).
FlowSeq ::: Flow Architecture for Prior ::: Actnorm.
The activation normalization layer (actnorm; BIBREF11) is an alternative to batch normalization BIBREF18 that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:
Both $\mathbf {z}$ and $\mathbf {z}^{\prime }$ are tensors of shape $[T\times d_{\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\mathrm {z}}$. The parameters are initialized such that over each feature $\mathbf {z}_{t}^{\prime }$ has zero mean and unit variance given an initial mini-batch of data.
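A minimal PyTorch sketch of an actnorm layer for $[T\times d_{\mathrm {z}}]$ sequences is given below; the data-dependent initialization is shown in a simplified in-place form and the exact parameterization is an assumption about implementation details.

import torch

class ActNorm(torch.nn.Module):
    """Per-feature scale and bias over a [T, d_z] sequence, with a tractable log-det."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = torch.nn.Parameter(torch.zeros(dim))
        self.bias = torch.nn.Parameter(torch.zeros(dim))
        self.initialized = False

    def forward(self, z):                                   # z: [T, d_z]
        if not self.initialized:                            # data-dependent init
            with torch.no_grad():
                self.bias.copy_(-z.mean(dim=0))
                self.log_scale.copy_(-torch.log(z.std(dim=0) + 1e-6))
            self.initialized = True
        out = (z + self.bias) * self.log_scale.exp()
        logdet = z.size(0) * self.log_scale.sum()           # T * sum_j log s_j
        return out, logdet

z = torch.randn(10, 16)
out, logdet = ActNorm(16)(z)
print(out.mean().item(), out.std().item(), logdet.item())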
FlowSeq ::: Flow Architecture for Prior ::: Invertible Multi-head Linear Layers.
To incorporate general permutations of variables along the feature dimension to ensure that each dimension can affect every other one after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data:
where $\mathbf {W}$ is the weight matrix of shape $[d_{\mathrm {z}} \times d_{\mathrm {z}}]$. The log-determinant of this transformation is:
The cost of computing $\mathrm {det}(\mathbf {W})$ is $O(d_{\mathrm {z}}^3)$.
Unfortunately, $d_{\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\mathrm {det}(\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\times d_h$ weight matrix $\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern.
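The following sketch illustrates the multi-head idea: the feature vector is split into heads and a single $d_h \times d_h$ invertible matrix is applied per head, so the log-determinant only requires a $d_h \times d_h$ computation. Sharing one weight matrix across heads, the row-major split, and the random-rotation initialization are simplifying assumptions for illustration.

import torch

class MultiHeadInvertibleLinear(torch.nn.Module):
    def __init__(self, dim, heads):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_h = heads, dim // heads
        # Initialize with a random rotation so the matrix starts invertible.
        w = torch.linalg.qr(torch.randn(self.d_h, self.d_h))[0]
        self.weight = torch.nn.Parameter(w)

    def forward(self, z):                              # z: [T, d_z]
        T = z.size(0)
        zh = z.view(T, self.heads, self.d_h)           # split features into heads
        out = zh @ self.weight                         # d_h x d_h transform per head
        # log-det: each of the T * heads blocks contributes log|det W|.
        logdet = T * self.heads * torch.linalg.slogdet(self.weight).logabsdet
        return out.view(T, -1), logdet

layer = MultiHeadInvertibleLinear(dim=512, heads=8)
z = torch.randn(20, 512)
out, logdet = layer(z)
print(out.shape, logdet.item())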
FlowSeq ::: Flow Architecture for Prior ::: Affine Coupling Layers.
To model interdependence across time steps, we use affine coupling layers BIBREF19:
where $\mathrm {s}(\mathbf {z}_a, \mathbf {x})$ and $\mathrm {b}(\mathbf {z}_a, \mathbf {x})$ are outputs of two neural networks with $\mathbf {z}_a$ and $\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\mathbf {z}_a$, followed by multi-head inter-attention over $\mathbf {x}$, followed by a position-wise feed-forward network. The input $\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.
As in BIBREF19, the $\mathrm {split}()$ function splits the input tensor $\mathbf {z}$ into two halves, while the $\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits operate on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\mathbf {z}_a$ and $\mathbf {z}_b$ to increase the flexibility of the split function. Different types of affine coupling layers alternate in the flow, similar to the linear layers.
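Below is a simplified affine coupling step using the first (time-dimension) split type, with the $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ networks collapsed into a small feed-forward module and the conditioning on $\mathbf {x}$ omitted; in FlowSeq these are Transformer decoder layers attending to the source encodings, so this is only an illustrative sketch.

import torch

class TimeAffineCoupling(torch.nn.Module):
    """Couples even-indexed time steps (z_a) to odd-indexed ones (z_b)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2 * dim),
        )

    def forward(self, z):                               # z: [T, d], T even
        z_a, z_b = z[0::2], z[1::2]                     # split along time
        s, b = self.net(z_a).chunk(2, dim=-1)           # scale and bias from z_a
        scale = torch.sigmoid(s + 2.0)                  # bounded scale for stability
        z_b = z_b * scale + b
        logdet = torch.log(scale).sum()
        out = torch.stack([z_a, z_b], dim=1).view(-1, z.size(1))   # interleave back
        return out, logdet

layer = TimeAffineCoupling(dim=16)
z = torch.randn(8, 16)
out, logdet = layer(z)
print(out.shape, logdet.item())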
FlowSeq ::: Flow Architecture for Prior ::: Multi-scale Architecture.
We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting the tensor with shape $[T \times \frac{d}{2}]$. Then the squeezing operation transforms the $T \times \frac{d}{2}$ tensor into an $\frac{T}{2} \times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture.
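The squeeze operation itself is just a reshape that trades time resolution for feature width; a minimal sketch (assuming the sequence has already been padded so that $T$ is even, as described above):

import torch

def squeeze(z):
    """[T, d] -> [T/2, 2d]: merge each pair of adjacent time steps into one feature vector."""
    T, d = z.shape
    assert T % 2 == 0, "pad with EOS so T is divisible by 2"
    return z.reshape(T // 2, 2 * d)

def unsqueeze(z):
    """Inverse of squeeze: [T/2, 2d] -> [T, d]."""
    T2, d2 = z.shape
    return z.reshape(T2 * 2, d2 // 2)

z = torch.randn(10, 8)
print(squeeze(z).shape, unsqueeze(squeeze(z)).shape)   # torch.Size([5, 16]) torch.Size([10, 8])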
FlowSeq ::: Predicting Target Sequence Length
In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.
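A sketch of such a length predictor — max-pooling the source encodings, projecting to 41 logits for offsets in $[-20, 20]$, and converting the predicted offset back into a target length — is shown below; the module structure and dimensions are illustrative assumptions.

import torch

class LengthPredictor(torch.nn.Module):
    """Predicts the length difference T_y - T_x, restricted to [-20, 20]."""
    def __init__(self, d_model, max_delta=20):
        super().__init__()
        self.max_delta = max_delta
        self.proj = torch.nn.Linear(d_model, 2 * max_delta + 1)   # 41 classes

    def forward(self, src_enc):                 # src_enc: [T_x, d_model]
        pooled = src_enc.max(dim=0).values      # max-pool over source positions
        return self.proj(pooled)                # logits over length differences

    def predict_length(self, src_enc):
        delta = int(self.forward(src_enc).argmax()) - self.max_delta
        return src_enc.size(0) + delta

predictor = LengthPredictor(d_model=256)
src_enc = torch.randn(12, 256)
print(predictor.predict_length(src_enc))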
FlowSeq ::: Decoding Process
At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.
FlowSeq ::: Decoding Process ::: Argmax Decoding.
Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\mathbf {z}$:
where identifying $\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6).
FlowSeq ::: Decoding Process ::: Noisy Parallel Decoding (NPD).
A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\times r$ candidates.
FlowSeq ::: Decoding Process ::: Importance Weighted Decoding (IWD)
The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:
IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45.
FlowSeq ::: Discussion
Different from the architecture proposed in BIBREF9, the architecture of FlowSeq does not use any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that FlowSeq remains non-autoregressive even if we use an RNN in the architecture, because the RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus, while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models.
Experiments ::: Experimental Setups ::: Translation Datasets
We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset, IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings, while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 tokens for the WMT datasets and 60 tokens for IWSLT2014.
Experiments ::: Experimental Setups ::: Modules and Hyperparameters
We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For the WMT datasets, the encoder consists of 6 layers, and the decoder and posterior are composed of 4 layers with 8 attention heads; for IWSLT, the encoder has 5 layers, and the decoder and posterior have 3 layers with 4 attention heads. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7.
Experiments ::: Experimental Setups ::: Optimization
Parameter optimization is performed with the Adam optimizer BIBREF24 with $\beta =(0.9, 0.999)$ and $\epsilon =10^{-6}$. Each mini-batch consists of 2048 sentences. The learning rate is initialized to $5\times 10^{-4}$ and decays exponentially with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all FlowSeq models, we apply label smoothing of $0.1$ and average the 5 best checkpoints to create the final model.
At the beginning of training, the posterior network is randomly initialized, providing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\mathrm {KL}$ term in the ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. The $\mathrm {KL}$ weight then increases linearly to one over another 10,000 updates, which we found essential for accelerating training and achieving stable performance.
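A small sketch of the KL-weight schedule just described (zero for the first 30,000 updates, then a linear ramp to one over the next 10,000 updates); the function name is ours.

```python
def kl_weight(step, warmup_steps=30000, anneal_steps=10000):
    """KL annealing schedule: 0 during warm-up, then a linear ramp up to 1."""
    if step < warmup_steps:
        return 0.0
    if step < warmup_steps + anneal_steps:
        return (step - warmup_steps) / anneal_steps
    return 1.0

# ELBO-style loss at a given training step (reconstruction + weighted KL):
# loss = recon_loss + kl_weight(step) * kl_loss
```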
Experiments ::: Experimental Setups ::: Knowledge Distillation
Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45.
Experiments ::: Main Results
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines using purely non-autoregressive decoding methods that generate the output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block lists results obtained with knowledge distillation. Without knowledge distillation, the FlowSeq base model achieves a significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR, demonstrating the effectiveness of FlowSeq in modeling the complex interdependence within target sequences.
Regarding the effect of knowledge distillation, we make two main observations: i) similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq; ii) compared to previous models, the benefit of knowledge distillation for FlowSeq is less significant, yielding less than 3 BLEU points of improvement on the WMT2014 DE-EN corpus, and even no improvement on the WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely as much on knowledge distillation to alleviate the multi-modality problem.
Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from the autoregressive Transformer. For the sampling procedure in IWD and NPD, we sample from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\rbrace $ and select the best temperature based on performance on the development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind the autoregressive Transformer in modeling the data distribution. Compared with CMLM BIBREF8 with 10 iterations of refinement, a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both the WMT2014 and WMT2016 corpora, with only a slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq is left to future work.
Experiments ::: Analysis on Decoding Speed
In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU.
Experiments ::: Analysis on Decoding Speed ::: How does batch size affect the decoding speed?
First, we investigate how the decoding batch size affects decoding speed. We vary the decoding batch size within $\lbrace 1, 4, 8, 32, 64, 128\rbrace $. Figure FIGREF44 shows that decoding is faster for both FlowSeq and the Transformer when using a larger batch size. However, FlowSeq gains much more in decoding speed as the batch size increases, achieving a speed-up of 594% for the base model and 403% for the large model at a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more amenable to batching, while the Transformer model with beam search at test time benefits less from batching.
Experiments ::: Analysis on Decoding Speed ::: How does sentence length affect the decoding speed?
Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq.
Experiments ::: Analysis of Rescoring Candidates
In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used.
Experiments ::: Analysis of Translation Diversity
Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses: a lower pairwise-BLEU score implies a more diverse hypothesis set, while a higher BLEU score implies better translation quality. We experiment on a subset of the WMT14 EN-DE test set with ten references per sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses per sentence) to analyze the diversity and quality of its outputs. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, increasing the sampling temperature provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in Appendix SECREF9 and SECREF10.
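For reference, the sketch below shows one way pairwise-BLEU could be computed for the hypothesis set of a single source sentence; the bleu callable is a placeholder for any sentence-level BLEU implementation, and the quality measure shown is a simplification of the leave-one-out protocol of BIBREF28.

```python
def pairwise_bleu(hypotheses, bleu):
    """Average BLEU of each hypothesis against the remaining hypotheses.

    Lower pairwise-BLEU means a more diverse hypothesis set.
    bleu(hyp, refs) is any sentence-level BLEU callable (e.g. built on sacrebleu).
    """
    scores = []
    for i, hyp in enumerate(hypotheses):
        others = [h for j, h in enumerate(hypotheses) if j != i]
        scores.append(bleu(hyp, others))
    return sum(scores) / len(scores)

def average_quality_bleu(hypotheses, references, bleu):
    """Quality: score each hypothesis against the reference set.

    This simply averages BLEU over hypotheses; the leave-one-out scheme of
    BIBREF28 additionally holds out references, which we omit here.
    """
    scores = [bleu(hyp, references) for hyp in hypotheses]
    return sum(scores) / len(scores)
```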
Conclusion
We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to investigate, both theoretically and empirically, the latent space in FlowSeq, providing deeper insight into the model and potentially enabling controllable text generation.
Acknowledgments
This work was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program and grant HR0011-15-C-0114 funded under the LORELEI program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors thank Amazon for their gift of AWS cloud credits and anonymous reviewers for their helpful suggestions.
Appendix: FlowSeq
Flow Layers ::: ActNorm
Log-determinant:
Flow Layers ::: Invertible Linear
Log-determinant:
where $h$ is the number of heads.
Flow Layers ::: Affine Coupling
Log-determinant:
Analysis of training dynamics
In Fig. FIGREF57, we plot the training and development loss together with development BLEU scores for the first 50 epochs. The reconstruction loss increases during the initial stage of training, then starts to decrease once training proceeds with the full KL loss. In addition, we observe that FlowSeq does not suffer from the KL collapse problem BIBREF30, BIBREF31. This is because the decoder of FlowSeq is non-autoregressive, with the latent variable $\mathbf {z}$ as its only input.
Analysis of Translation Results
In Tab. TABREF58, we present randomly picked translation outputs from the test set of WMT14 DE-EN. For each German input sentence, we pick three hypotheses from 30 samples. We make the following observations: First, in most cases FlowSeq accurately expresses the meaning of the source sentence, sometimes in a different way from the reference sentence, which cannot be precisely reflected by the BLEU score. Second, by controlling the sampling hyper-parameters such as the length candidates $l$, the sampling temperature $\tau $ and the number of samples $r$ under each length, FlowSeq is able to generate diverse translations expressing the same meaning. Third, repetitions and broken translations still occur in some cases, due to the lack of language model dependencies in the decoder.
Results of Translation Diversity
Table TABREF59 shows the detailed results of translation diversity. | Unanswerable
ba6422e22297c7eb0baa381225a2f146b9621791 | ba6422e22297c7eb0baa381225a2f146b9621791_0 | Q: What is the performance difference between proposed method and state-of-the-arts on these datasets?
Text: Introduction
Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $).
Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\theta (\mathbf {y}|\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:
Each factor, $P_\theta (y_{t} | y_{<t}, \mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large space of outputs $\mathbf {y}$ and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation must proceed through a linear left-to-right pass over the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.
Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\theta (\mathbf {y}|\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:
Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\mathbf {z}$ to model these conditional dependencies:
where $p_{\theta }(\mathbf {z}|\mathbf {x})$ is the prior distribution over latent $\mathbf {z}$ and $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:
BIBREF6 proposed a $\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\textbf {y}$.
In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models an expressive prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has the potential to introduce more meaningful latent variables $\mathbf {z}$ into the non-autoregressive generation of Eq. (DISPLAY_FORM5).
FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear.
Background
As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3.
Background ::: Flow-based Generative Models
Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations.
Formally, a set of latent variables $\mathbf {\upsilon } \in \Upsilon $ are introduced with a simple prior distribution $p_{\Upsilon }(\upsilon )$. We then define a bijection function $f: \mathcal {Z} \rightarrow \Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\mathbf {z}$:
An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\mathbf {z}\in \mathcal {Z}$ by:
Here $\frac{\partial f_{\theta }(\mathbf {z})}{\partial \mathbf {z}}$ is the Jacobian matrix of $f_{\theta }$ at $\mathbf {z}$.
Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\mathbf {z}$ by calculating the (simple) density of $\upsilon $ and the Jacobian of the transformation from $\mathbf {z}$ to $\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\theta }$ where both the inverse functions $g_{\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:
where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations (omitting $\theta $s for brevity).
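As a toy illustration of how a flow composes invertible steps and accumulates log-determinants, the sketch below stacks element-wise affine transforms, chosen only because their inverses and Jacobians are trivial; it is not the FlowSeq prior.

```python
import torch

class AffineStep:
    """One invertible element-wise transform: v = z * exp(s) + b."""
    def __init__(self, dim):
        self.s = torch.zeros(dim, requires_grad=True)
        self.b = torch.zeros(dim, requires_grad=True)

    def forward(self, z):
        v = z * torch.exp(self.s) + self.b
        # Per-example log|det(df/dz)| of an element-wise affine map.
        logdet = self.s.sum() * torch.ones(z.size(0))
        return v, logdet

    def inverse(self, v):
        return (v - self.b) * torch.exp(-self.s)

class Flow:
    """f = f_1 o f_2 o ... o f_K; log-determinants add up across steps."""
    def __init__(self, steps):
        self.steps = steps

    def forward(self, z):
        logdet = torch.zeros(z.size(0))
        for step in self.steps:
            z, ld = step.forward(z)
            logdet = logdet + ld
        return z, logdet   # log p(z) = log p_base(v) + logdet

    def inverse(self, v):
        for step in reversed(self.steps):
            v = step.inverse(v)
        return v
```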
Background ::: Variational Inference and Training
In the context of maximum likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:
where $D=\lbrace (\mathbf {x}^i, \mathbf {y}^i)\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\theta }(\mathbf {y}| \mathbf {x})$ after marginalizing out latent variables $\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\phi }(\mathbf {z}|\mathbf {y}, \mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\log P_\theta (\mathbf {y}|\mathbf {z},\mathbf {x})$ and KL-divergence between the posterior and the prior:
Both inference model $\phi $ and decoder $\theta $ parameters are optimized according to this objective.
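A minimal single-sample sketch of this objective: sample $\mathbf {z}$ from the posterior with the reparameterization trick, score the reconstruction under the decoder, and estimate the KL term as $\log q - \log p$. The three callables are placeholders for FlowSeq's posterior, decoder, and prior flow.

```python
import math
import torch

def elbo(x, y, posterior, decoder, log_prior, kl_weight=1.0):
    """Single-sample ELBO estimate.

    posterior(y, x) -> (mu, log_sigma): diagonal Gaussian q(z | y, x).
    decoder(z, x)   -> log P(y | z, x) summed over output positions.
    log_prior(z, x) -> log p(z | x) under the prior flow.
    """
    mu, log_sigma = posterior(y, x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(log_sigma) * eps              # reparameterized sample

    recon = decoder(z, x)                            # "reconstruction" term
    log_q = (-0.5 * ((z - mu) / log_sigma.exp()) ** 2
             - log_sigma - 0.5 * math.log(2 * math.pi)).sum()
    kl = log_q - log_prior(z, x)                     # single-sample KL estimate
    return recon - kl_weight * kl                    # maximize this quantity
```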
FlowSeq
We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.
At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\mathbf {z}$ from the current posterior $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$. Next, we feed $\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ and $p_{\theta }(\mathbf {z}|\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).
At test time, generation is performed by first sampling a latent code $\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$.
FlowSeq ::: Source Encoder
The source encoder encodes the source sequence into hidden representations, which are used to compute attention when generating latent variables in the posterior and prior networks, as well as the cross-attention with the decoder. Any standard neural sequence model can be used as the encoder, including RNNs BIBREF0 or Transformers BIBREF3.
FlowSeq ::: Posterior ::: Generation of Latent Variables.
The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:
where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers.
FlowSeq ::: Posterior ::: Zero initialization.
While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution starts out as a simple normal distribution, which we found helps train very deep generative flows more stably.
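A sketch of a posterior head with the zero initialization described above; the module and layer names are illustrative.

```python
import torch
import torch.nn as nn

class PosteriorHead(nn.Module):
    """Maps per-position hidden states to a diagonal Gaussian q(z_t | y, x)."""
    def __init__(self, d_model, d_z):
        super().__init__()
        self.mu = nn.Linear(d_model, d_z)
        self.log_var = nn.Linear(d_model, d_z)
        # Zero-initialize the final transforms so that, at the start of training,
        # q(z | y, x) is a standard normal regardless of the input.
        for layer in (self.mu, self.log_var):
            nn.init.zeros_(layer.weight)
            nn.init.zeros_(layer.bias)

    def forward(self, h):
        # h: [B, T, d_model] hidden states from the posterior Transformer/RNN.
        return self.mu(h), self.log_var(h)
```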
FlowSeq ::: Posterior ::: Token Dropout.
The motivation of introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.
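A sketch of token-level dropout applied to the posterior's input, where whole target tokens are replaced by a special mask index before computing $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$; the mask index and dropout rate are illustrative assumptions.

```python
import torch

def token_dropout(tokens, mask_index, p=0.2, pad_index=0, training=True):
    """Randomly replace whole target tokens before feeding them to the posterior.

    tokens: [B, T] integer token ids of the target sentence y.
    Padding positions are left untouched; dropped positions are set to mask_index,
    forcing the posterior to rely on context rather than copying y_t into z_t.
    """
    if not training:
        return tokens
    drop = (torch.rand(tokens.shape, device=tokens.device) < p) & (tokens != pad_index)
    return tokens.masked_fill(drop, mask_index)
```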
FlowSeq ::: Decoder
As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive.
FlowSeq ::: Flow Architecture for Prior
The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13). Each step of flow consists of three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to the calculation of log-determinants (details in Appendix SECREF6).
FlowSeq ::: Flow Architecture for Prior ::: Actnorm.
The activation normalization layer (actnorm; BIBREF11) is an alternative to batch normalization BIBREF18 that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:
Both $\mathbf {z}$ and $\mathbf {z}^{\prime }$ are tensors of shape $[T\times d_{\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\mathrm {z}}$. The parameters are initialized such that over each feature $\mathbf {z}_{t}^{\prime }$ has zero mean and unit variance given an initial mini-batch of data.
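Since the display equation for actnorm did not survive extraction, the sketch below assumes the standard Glow-style affine form with per-feature scale and bias and data-dependent initialization from the first mini-batch; it reflects our reading rather than the authors' code.

```python
import torch
import torch.nn as nn

class ActNorm(nn.Module):
    """Per-feature affine transform z' = exp(log_scale) * z + bias."""
    def __init__(self, d_z):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(d_z))
        self.bias = nn.Parameter(torch.zeros(d_z))
        self.initialized = False

    def forward(self, z):
        # z: [T, d_z] (or [B, T, d_z]); each feature is normalized independently.
        if not self.initialized:
            with torch.no_grad():
                flat = z.reshape(-1, z.size(-1))
                std = flat.std(dim=0) + 1e-6
                self.log_scale.copy_(-torch.log(std))
                self.bias.copy_(-flat.mean(dim=0) / std)  # zero mean, unit variance
            self.initialized = True
        z_out = z * torch.exp(self.log_scale) + self.bias
        logdet = self.log_scale.sum() * z.size(-2)         # summed over T time steps
        return z_out, logdet

    def inverse(self, z_out):
        return (z_out - self.bias) * torch.exp(-self.log_scale)
```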
FlowSeq ::: Flow Architecture for Prior ::: Invertible Multi-head Linear Layers.
To incorporate general permutations of variables along the feature dimension, ensuring that each dimension can affect every other one after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\times 1$ convolution layer for 2D images. It is straightforward to apply a similar transformation to sequential data:
where $\mathbf {W}$ is the weight matrix of shape $[d_{\mathrm {z}} \times d_{\mathrm {z}}]$. The log-determinant of this transformation is:
The cost of computing $\mathrm {det}(\mathbf {W})$ is $O(d_{\mathrm {z}}^3)$.
Unfortunately, $d_{\mathrm {z}}$ in seq2seq generation is commonly large, e.g. 512, making the computation of $\mathrm {det}(\mathbf {W})$ a significant bottleneck. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with a $d_h\times d_h$ weight matrix $\mathbf {W}$, significantly reducing the dimension. For the splitting of heads, one step of flow contains one linear layer with either a row-major or column-major splitting format, and steps with different linear layers are composed in an alternating pattern.
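A sketch of the multi-head invertible linear layer as we understand it: the feature vector is split into $h$ heads and each head is transformed by a small invertible $d_h \times d_h$ matrix, so the log-determinant only involves a $d_h \times d_h$ determinant. Details such as sharing one weight matrix across heads are our assumptions.

```python
import torch
import torch.nn as nn

class InvertibleMultiHeadLinear(nn.Module):
    """Applies an invertible d_h x d_h linear map to each of h feature heads."""
    def __init__(self, d_z, heads):
        super().__init__()
        assert d_z % heads == 0
        self.heads, self.d_h = heads, d_z // heads
        # Initialize with a random rotation so the matrix starts out invertible.
        w, _ = torch.linalg.qr(torch.randn(self.d_h, self.d_h))
        self.weight = nn.Parameter(w)

    def forward(self, z):
        # z: [T, d_z] -> [T, h, d_h]; all heads share the same weight matrix here.
        T = z.size(0)
        zh = z.reshape(T, self.heads, self.d_h)
        out = zh @ self.weight
        # Log-determinant accumulates over h heads per position and T positions.
        logdet = T * self.heads * torch.slogdet(self.weight)[1]
        return out.reshape(T, -1), logdet

    def inverse(self, z):
        T = z.size(0)
        zh = z.reshape(T, self.heads, self.d_h)
        return (zh @ torch.inverse(self.weight)).reshape(T, -1)
```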
FlowSeq ::: Flow Architecture for Prior ::: Affine Coupling Layers.
To model interdependence across time steps, we use affine coupling layers BIBREF19:
where $\mathrm {s}(\mathbf {z}_a, \mathbf {x})$ and $\mathrm {b}(\mathbf {z}_a, \mathbf {x})$ are outputs of two neural networks with $\mathbf {z}_a$ and $\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\mathbf {z}_a$, followed by multi-head inter-attention over $\mathbf {x}$, followed by a position-wise feed-forward network. The input $\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.
As in BIBREF19, the $\mathrm {split}()$ function splits the input tensor $\mathbf {z}$ into two halves, while the $\mathrm {concat}$ operation performs the corresponding reverse concatenation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\mathbf {z}$ along the time dimension on alternate indices; in this case, FlowSeq mainly models the interactions between time-steps. The second and third types of split operate on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\mathbf {z}_a$ and $\mathbf {z}_b$ to increase the flexibility of the split function. Different types of affine coupling layers alternate in the flow, similar to the linear layers.
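Because the coupling-layer equation was lost in extraction, the sketch below assumes the standard RealNVP-style affine coupling $\mathbf {z}_b^{\prime } = \mathrm {s}(\mathbf {z}_a, \mathbf {x}) \odot \mathbf {z}_b + \mathrm {b}(\mathbf {z}_a, \mathbf {x})$, with the Transformer-based networks and the split/concat functions abstracted as callables.

```python
import torch

class AffineCoupling:
    """RealNVP-style coupling: transform z_b conditioned on z_a and the source x."""
    def __init__(self, s_net, b_net, split, concat):
        # s_net(z_a, x) and b_net(z_a, x) stand in for the Transformer decoder layer
        # described above; split/concat are one of the three split types.
        self.s_net, self.b_net = s_net, b_net
        self.split, self.concat = split, concat

    def forward(self, z, x):
        z_a, z_b = self.split(z)
        s = self.s_net(z_a, x)
        b = self.b_net(z_a, x)
        z_b = z_b * torch.exp(s) + b
        logdet = s.sum()                 # log|det| of an element-wise affine map
        return self.concat(z_a, z_b), logdet

    def inverse(self, z, x):
        z_a, z_b = self.split(z)
        s = self.s_net(z_a, x)
        b = self.b_net(z_a, x)
        z_b = (z_b - b) * torch.exp(-s)
        return self.concat(z_a, z_b)
```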
FlowSeq ::: Flow Architecture for Prior ::: Multi-scale Architecture.
We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated to be helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting a tensor with shape $[T \times \frac{d}{2}]$. Then the squeezing operation transforms the $T \times \frac{d}{2}$ tensor into a $\frac{T}{2} \times d$ one as the input to the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture.
FlowSeq ::: Predicting Target Sequence Length
In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.
FlowSeq ::: Decoding Process
At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.
FlowSeq ::: Decoding Process ::: Argmax Decoding.
Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\mathbf {z}$:
where identifying $\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6).
FlowSeq ::: Decoding Process ::: Noisy Parallel Decoding (NPD).
A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\times r$ candidates.
FlowSeq ::: Decoding Process ::: Importance Weighted Decoding (IWD)
The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:
IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45.
FlowSeq ::: Discussion
Different from the architecture proposed in BIBREF9, the architecture of FlowSeq does not use any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that FlowSeq remains non-autoregressive even if we use an RNN in the architecture, because the RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus, while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models.
Experiments ::: Experimental Setups ::: Translation Datasets
We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset, IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings, while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 tokens for the WMT datasets and 60 tokens for IWSLT2014.
Experiments ::: Experimental Setups ::: Modules and Hyperparameters
We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For the WMT datasets, the encoder consists of 6 layers, and the decoder and posterior are composed of 4 layers with 8 attention heads; for IWSLT, the encoder has 5 layers, and the decoder and posterior have 3 layers with 4 attention heads. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7.
Experiments ::: Experimental Setups ::: Optimization
Parameter optimization is performed with the Adam optimizer BIBREF24 with $\beta =(0.9, 0.999)$ and $\epsilon =10^{-6}$. Each mini-batch consists of 2048 sentences. The learning rate is initialized to $5\times 10^{-4}$ and decays exponentially with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all FlowSeq models, we apply label smoothing of $0.1$ and average the 5 best checkpoints to create the final model.
At the beginning of training, the posterior network is randomly initialized, providing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\mathrm {KL}$ term in the ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. The $\mathrm {KL}$ weight then increases linearly to one over another 10,000 updates, which we found essential for accelerating training and achieving stable performance.
Experiments ::: Experimental Setups ::: Knowledge Distillation
Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45.
Experiments ::: Main Results
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines using purely non-autoregressive decoding methods that generate the output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block lists results obtained with knowledge distillation. Without knowledge distillation, the FlowSeq base model achieves a significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR, demonstrating the effectiveness of FlowSeq in modeling the complex interdependence within target sequences.
Regarding the effect of knowledge distillation, we make two main observations: i) similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq; ii) compared to previous models, the benefit of knowledge distillation for FlowSeq is less significant, yielding less than 3 BLEU points of improvement on the WMT2014 DE-EN corpus, and even no improvement on the WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely as much on knowledge distillation to alleviate the multi-modality problem.
Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from the autoregressive Transformer. For the sampling procedure in IWD and NPD, we sample from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\rbrace $ and select the best temperature based on performance on the development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind the autoregressive Transformer in modeling the data distribution. Compared with CMLM BIBREF8 with 10 iterations of refinement, a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both the WMT2014 and WMT2016 corpora, with only a slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq is left to future work.
Experiments ::: Analysis on Decoding Speed
In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU.
Experiments ::: Analysis on Decoding Speed ::: How does batch size affect the decoding speed?
First, we investigate how the decoding batch size affects decoding speed. We vary the decoding batch size within $\lbrace 1, 4, 8, 32, 64, 128\rbrace $. Figure FIGREF44 shows that decoding is faster for both FlowSeq and the Transformer when using a larger batch size. However, FlowSeq gains much more in decoding speed as the batch size increases, achieving a speed-up of 594% for the base model and 403% for the large model at a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more amenable to batching, while the Transformer model with beam search at test time benefits less from batching.
Experiments ::: Analysis on Decoding Speed ::: How does sentence length affect the decoding speed?
Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq.
Experiments ::: Analysis of Rescoring Candidates
In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used.
Experiments ::: Analysis of Translation Diversity
Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses: a lower pairwise-BLEU score implies a more diverse hypothesis set, while a higher BLEU score implies better translation quality. We experiment on a subset of the WMT14 EN-DE test set with ten references per sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses per sentence) to analyze the diversity and quality of its outputs. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, increasing the sampling temperature provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in Appendix SECREF9 and SECREF10.
Conclusion
We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to investigate, both theoretically and empirically, the latent space in FlowSeq, providing deeper insight into the model and potentially enabling controllable text generation.
Acknowledgments
This work was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program and grant HR0011-15-C-0114 funded under the LORELEI program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors thank Amazon for their gift of AWS cloud credits and anonymous reviewers for their helpful suggestions.
Appendix: FlowSeq
Flow Layers ::: ActNorm
Log-determinant:
Flow Layers ::: Invertible Linear
Log-determinant:
where $h$ is the number of heads.
Flow Layers ::: Affine Coupling
Log-determinant:
Analysis of training dynamics
In Fig. FIGREF57, we plot the training and development loss together with development BLEU scores for the first 50 epochs. The reconstruction loss increases during the initial stage of training, then starts to decrease once training proceeds with the full KL loss. In addition, we observe that FlowSeq does not suffer from the KL collapse problem BIBREF30, BIBREF31. This is because the decoder of FlowSeq is non-autoregressive, with the latent variable $\mathbf {z}$ as its only input.
Analysis of Translation Results
In Tab. TABREF58, we present randomly picked translation outputs from the test set of WMT14 DE-EN. For each German input sentence, we pick three hypotheses from 30 samples. We make the following observations: First, in most cases FlowSeq accurately expresses the meaning of the source sentence, sometimes in a different way from the reference sentence, which cannot be precisely reflected by the BLEU score. Second, by controlling the sampling hyper-parameters such as the length candidates $l$, the sampling temperature $\tau $ and the number of samples $r$ under each length, FlowSeq is able to generate diverse translations expressing the same meaning. Third, repetitions and broken translations still occur in some cases, due to the lack of language model dependencies in the decoder.
Results of Translation Diversity
Table TABREF59 shows the detailed results of translation diversity. | Difference is around 1 BLEU score lower on average than state of the art methods.
65e72ad72a9cbfc379f126b10b0ce80cfe44579b | 65e72ad72a9cbfc379f126b10b0ce80cfe44579b_0 | Q: What non autoregressive NMT models are used for comparison?
Text: Introduction
Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $).
Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\theta (\mathbf {y}|\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:
Each factor, $P_\theta (y_{t} | y_{<t}, \mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large space of outputs $\mathbf {y}$ and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation must proceed through a linear left-to-right pass over the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.
Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\theta (\mathbf {y}|\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:
Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\mathbf {z}$ to model these conditional dependencies:
where $p_{\theta }(\mathbf {z}|\mathbf {x})$ is the prior distribution over latent $\mathbf {z}$ and $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:
BIBREF6 proposed a $\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\textbf {y}$.
In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models an expressive prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has the potential to introduce more meaningful latent variables $\mathbf {z}$ into the non-autoregressive generation of Eq. (DISPLAY_FORM5).
FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear.
Background
As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3.
Background ::: Flow-based Generative Models
Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations.
Formally, a set of latent variables $\mathbf {\upsilon } \in \Upsilon $ are introduced with a simple prior distribution $p_{\Upsilon }(\upsilon )$. We then define a bijection function $f: \mathcal {Z} \rightarrow \Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\mathbf {z}$:
An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\mathbf {z}\in \mathcal {Z}$ by:
Here $\frac{\partial f_{\theta }(\mathbf {z})}{\partial \mathbf {z}}$ is the Jacobian matrix of $f_{\theta }$ at $\mathbf {z}$.
Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\mathbf {z}$ by calculating the (simple) density of $\upsilon $ and the Jacobian of the transformation from $\mathbf {z}$ to $\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\theta }$ where both the inverse functions $g_{\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:
where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations (omitting $\theta $s for brevity).
Background ::: Variational Inference and Training
In the context of maximum likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:
where $D=\lbrace (\mathbf {x}^i, \mathbf {y}^i)\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\theta }(\mathbf {y}| \mathbf {x})$ after marginalizing out latent variables $\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\phi }(\mathbf {z}|\mathbf {y}, \mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\log P_\theta (\mathbf {y}|\mathbf {z},\mathbf {x})$ and KL-divergence between the posterior and the prior:
Both inference model $\phi $ and decoder $\theta $ parameters are optimized according to this objective.
FlowSeq
We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.
At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\mathbf {z}$ from the current posterior $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$. Next, we feed $\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ and $p_{\theta }(\mathbf {z}|\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).
At test time, generation is performed by first sampling a latent code $\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$.
FlowSeq ::: Source Encoder
The source encoder encodes the source sequence into hidden representations, which are used to compute attention when generating latent variables in the posterior and prior networks, as well as the cross-attention with the decoder. Any standard neural sequence model can be used as the encoder, including RNNs BIBREF0 or Transformers BIBREF3.
FlowSeq ::: Posterior ::: Generation of Latent Variables.
The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:
where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers.
FlowSeq ::: Posterior ::: Zero initialization.
While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution starts out as a simple normal distribution, which we found helps train very deep generative flows more stably.
FlowSeq ::: Posterior ::: Token Dropout.
The motivation of introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.
FlowSeq ::: Decoder
As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive.
FlowSeq ::: Flow Architecture for Prior
The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13). Each step of flow consists of three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to the calculation of log-determinants (details in Appendix SECREF6).
FlowSeq ::: Flow Architecture for Prior ::: Actnorm.
The activation normalization layer (actnorm; BIBREF11) is an alternative to batch normalization BIBREF18 that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:
Both $\mathbf {z}$ and $\mathbf {z}^{\prime }$ are tensors of shape $[T\times d_{\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\mathrm {z}}$. The parameters are initialized such that over each feature $\mathbf {z}_{t}^{\prime }$ has zero mean and unit variance given an initial mini-batch of data.
FlowSeq ::: Flow Architecture for Prior ::: Invertible Multi-head Linear Layers.
To incorporate general permutations of variables along the feature dimension, ensuring that each dimension can affect every other one after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data:
where $\mathbf {W}$ is the weight matrix of shape $[d_{\mathrm {z}} \times d_{\mathrm {z}}]$. The log-determinant of this transformation is:
The cost of computing $\mathrm {det}(\mathbf {W})$ is $O(d_{\mathrm {z}}^3)$.
Unfortunately, $d_{\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\mathrm {det}(\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\times d_h$ weight matrix $\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern.
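A sketch of the multi-head invertible linear layer (only the row-major head splitting is shown; the near-orthogonal initialization is our choice, not a detail from the paper):

```python
import torch
import torch.nn as nn

class InvertibleMultiHeadLinear(nn.Module):
    """One shared, invertible d_h x d_h linear map applied to every head at every position."""
    def __init__(self, d_z, heads):
        super().__init__()
        self.heads, self.d_h = heads, d_z // heads
        w, _ = torch.linalg.qr(torch.randn(self.d_h, self.d_h))   # start near a rotation
        self.weight = nn.Parameter(w)

    def forward(self, z):                                         # z: [B, T, d_z]
        B, T, d_z = z.shape
        zh = z.view(B, T, self.heads, self.d_h)                   # row-major split into heads
        out = torch.matmul(zh, self.weight).view(B, T, d_z)
        _, logabsdet = torch.linalg.slogdet(self.weight)          # det of a small d_h x d_h matrix
        logdet = T * self.heads * logabsdet                       # per-sequence log-determinant
        return out, logdet
```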
FlowSeq ::: Flow Architecture for Prior ::: Affine Coupling Layers.
To model interdependence across time steps, we use affine coupling layers BIBREF19:
where $\mathrm {s}(\mathbf {z}_a, \mathbf {x})$ and $\mathrm {b}(\mathbf {z}_a, \mathbf {x})$ are outputs of two neural networks with $\mathbf {z}_a$ and $\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\mathbf {z}_a$, followed by multi-head inter-attention over $\mathbf {x}$, followed by a position-wise feed-forward network. The input $\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.
As in BIBREF19, the $\mathrm {split}()$ function splits the input tensor $\mathbf {z}$ into two halves, while the $\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits operate on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\mathbf {z}_a$ and $\mathbf {z}_b$ to increase the flexibility of the split function. Different types of affine coupling layers alternate in the flow, similar to the linear layers.
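The core computation of one coupling layer is sketched below, with a continuous feature-dimension split and a small feed-forward network standing in for the Transformer-decoder-layer $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$; the conditioning on $\mathbf {x}$ is omitted in this sketch.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """z_a passes through unchanged; z_b is scaled and shifted by functions of z_a (and x)."""
    def __init__(self, d_half, d_hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_half, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 2 * d_half))

    def forward(self, z):                          # z: [B, T, d_z]
        z_a, z_b = torch.chunk(z, 2, dim=-1)       # continuous feature-dimension split
        s, b = torch.chunk(self.net(z_a), 2, dim=-1)
        s = torch.sigmoid(s + 2.0)                 # keep scales positive and close to 1 at init
        z_b = s * z_b + b
        logdet = torch.log(s).sum(dim=(1, 2))      # per-sequence log|det|
        return torch.cat([z_a, z_b], dim=-1), logdet
```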
FlowSeq ::: Flow Architecture for Prior ::: Multi-scale Architecture.
We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated to be helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting a tensor with shape $[T \times \frac{d}{2}]$. Then the squeezing operation transforms the $T \times \frac{d}{2}$ tensor into a $\frac{T}{2} \times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture.
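A sketch of what happens at one scale boundary (shapes follow the description above, with a batch dimension added):

```python
import torch

def end_of_scale(z):
    """Drop half of the features (third split type), then squeeze pairs of adjacent
    time steps into the feature dimension: [B, T, d] -> [B, T/2, d].
    Assumes T has already been padded with EOS so that it is divisible by 2."""
    z_keep, z_factored_out = torch.chunk(z, 2, dim=-1)   # each [B, T, d/2]
    B, T, d_half = z_keep.shape
    z_next = z_keep.reshape(B, T // 2, 2 * d_half)       # input to the next scale
    return z_next, z_factored_out
```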
FlowSeq ::: Predicting Target Sequence Length
In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.
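A sketch of the length-difference classifier (41 classes for the range $[-20, 20]$; the module and variable names are ours):

```python
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    """Predict T_y - T_x as a class in [-20, 20] from max-pooled source encodings."""
    def __init__(self, d_model, max_delta=20):
        super().__init__()
        self.max_delta = max_delta
        self.classifier = nn.Linear(d_model, 2 * max_delta + 1)

    def forward(self, src_enc):                     # src_enc: [B, T_x, d_model]
        pooled, _ = src_enc.max(dim=1)              # max-pool over source positions
        logits = self.classifier(pooled)            # softmax over 41 length differences
        delta = logits.argmax(dim=-1) - self.max_delta
        return logits, delta                        # logits feed the joint training loss
```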
FlowSeq ::: Decoding Process
At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.
FlowSeq ::: Decoding Process ::: Argmax Decoding.
Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\mathbf {z}$:
where identifying $\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6).
FlowSeq ::: Decoding Process ::: Noisy Parallel Decoding (NPD).
A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\times r$ candidates.
FlowSeq ::: Decoding Process ::: Importance Weighted Decoding (IWD)
The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:
IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45.
FlowSeq ::: Discussion
Different from the architecture proposed in BIBREF9, the architecture of FlowSeq does not use any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that FlowSeq remains non-autoregressive even if we use an RNN in the architecture, because the RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus, while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models.
Experiments ::: Experimental Setups ::: Translation Datasets
We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings, while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 for the WMT datasets and 60 for IWSLT.
Experiments ::: Experimental Setups ::: Modules and Hyperparameters
We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For the WMT datasets, the encoder consists of 6 layers and the decoder and posterior are composed of 4 layers, with 8 attention heads; for IWSLT, the encoder has 5 layers, the decoder and posterior have 3 layers, and 4 attention heads are used. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7.
Experiments ::: Experimental Setups ::: Optimization
Parameter optimization is performed with the Adam optimizer BIBREF24 with $\beta =(0.9, 0.999)$ and $\epsilon =1e-6$. Each mini-batch consists of 2048 sentences. The learning rate is initialized to $5e-4$, and exponentially decays with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all the FlowSeq models, we apply $0.1$ label smoothing and average the 5 best checkpoints to create the final model.
At the beginning of training, the posterior network is randomly initialized, producing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\mathrm {KL}$ term in ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. Then the $\mathrm {KL}$ weight linearly increases to one for another 10,000 updates, which we found essential to accelerate training and achieve stable performance.
Experiments ::: Experimental Setups ::: Knowledge Distillation
Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45.
Experiments ::: Main Results
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines using purely non-autoregressive decoding methods that generate the output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block lists results using knowledge distillation. Without using knowledge distillation, the FlowSeq base model achieves a significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. This demonstrates the effectiveness of FlowSeq in modeling the complex interdependence in target languages.
Regarding the effect of knowledge distillation, we make two main observations: i) Similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq. ii) Compared to previous models, the benefit of knowledge distillation for FlowSeq is less significant, yielding less than a 3 BLEU improvement on the WMT2014 DE-EN corpus, and even no improvement on the WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely much on knowledge distillation to alleviate the multi-modality problem.
Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from the autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind the autoregressive Transformer in modeling the data distribution. Compared with CMLM BIBREF8 with 10 iterations of refinement, a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both the WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq is left to future work.
Experiments ::: Analysis on Decoding Speed
In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU.
Experiments ::: Analysis on Decoding Speed ::: How does batch size affect the decoding speed?
First, we investigate how the decoding batch size affects the decoding speed. We vary the decoding batch size within $\lbrace 1, 4, 8, 32, 64, 128\rbrace $. Figure FIGREF44 shows that for both FlowSeq and the Transformer, decoding is faster when using a larger batch size. However, FlowSeq has much larger gains in decoding speed w.r.t. the increase in batch size, gaining a speed-up of 594% for the base model and 403% for the large model when using a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more friendly to batching, while the Transformer model with beam search at test time benefits less from batching.
Experiments ::: Analysis on Decoding Speed ::: How does sentence length affect the decoding speed?
Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq.
Experiments ::: Analysis of Rescoring Candidates
In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used.
Experiments ::: Analysis of Translation Diversity
Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses. A lower pairwise-BLEU score implies a more diverse hypothesis set. And a higher BLEU score implies a better translation quality. We experiment on a subset of test set of WMT14-ENDE with ten references each sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses each sentence) to analyze how well the generation outputs of FlowSeq are in terms of diversity and quality. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, by increasing the sampling temperature it provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in the Appendix SECREF9 and SECREF10.
Conclusion
We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation by using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to, theoretically and empirically, investigate the latent space in FlowSeq, hence providing deep insights of the model, even enhancing controllable text generation.
Acknowledgments
This work was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program and grant HR0011-15-C-0114 funded under the LORELEI program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors thank Amazon for their gift of AWS cloud credits and anonymous reviewers for their helpful suggestions.
Appendix: FlowSeq
Flow Layers ::: ActNorm
Log-determinant:
Flow Layers ::: Invertible Linear
Log-determinant:
where $h$ is the number of heads.
Flow Layers ::: Affine Coupling
Log-determinant:
Analysis of training dynamics
In Fig. FIGREF57, we plot the train and dev loss together with dev BLEU scores for the first 50 epochs. We can see that the reconstruction loss increases at the initial stage of training, then starts to decrease when training with the full KL loss. In addition, we observed that FlowSeq does not suffer from the KL collapse problem BIBREF30, BIBREF31. This is because the decoder of FlowSeq is non-autoregressive, with the latent variable $\mathbf {z}$ as the only input.
Analysis of Translation Results
In Tab. TABREF58, we present randomly picked translation outputs from the test set of WMT14-DEEN. For each German input sentence, we pick three hypotheses from 30 samples. We have the following observations: First, in most cases, it can accurately express the meaning of the source sentence, sometimes in a different way from the reference sentence, which cannot be precisely reflected by the BLEU score. Second, by controlling the sampling hyper-parameters such as the length candidates $l$, the sampling temperature $\tau $ and the number of samples $r$ under each length, FlowSeq is able to generate diverse translations expressing the same meaning. Third, repetition and broken translations also exist in some cases due to the lack of language model dependencies in the decoder.
Results of Translation Diversity
Table TABREF59 shows the detailed results of translation diversity. | NAT w/ Fertility, NAT-IR, NAT-REG, LV NAR, CTC Loss, CMLM |
cf8edc6e8c4d578e2bd9965579f0ee81f4bf35a9 | cf8edc6e8c4d578e2bd9965579f0ee81f4bf35a9_0 | Q: What are three neural machine translation (NMT) benchmark datasets used for evaluation?
Text: Introduction
Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $).
Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\theta (\mathbf {y}|\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:
Each factor, $P_\theta (y_{t} | y_{<t}, \mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large output space of outputs $\mathbf {y}$, and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation of outputs must be done through a linear left-to-right pass through the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.
Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\theta (\mathbf {y}|\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:
Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\mathbf {z}$ to model these conditional dependencies:
where $p_{\theta }(\mathbf {z}|\mathbf {x})$ is the prior distribution over latent $\mathbf {z}$ and $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:
BIBREF6 proposed a $\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\textbf {y}$.
In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models expressive prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has potential to introduce more meaningful latent variables $\mathbf {z}$ in the non-autoregressive generation in Eq. (DISPLAY_FORM5).
FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear.
Background
As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3.
Background ::: Flow-based Generative Models
Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations.
Formally, a set of latent variables $\mathbf {\upsilon } \in \Upsilon $ are introduced with a simple prior distribution $p_{\Upsilon }(\upsilon )$. We then define a bijection function $f: \mathcal {Z} \rightarrow \Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\mathbf {z}$:
An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\mathbf {z}\in \mathcal {Z}$ by:
Here $\frac{\partial f_{\theta }(\mathbf {z})}{\partial \mathbf {z}}$ is the Jacobian matrix of $f_{\theta }$ at $\mathbf {z}$.
Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\mathbf {z}$ by calculating the (simple) density of $\upsilon $ and the Jacobian of the transformation from $\mathbf {z}$ to $\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\theta }$ where both the inverse functions $g_{\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:
where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations (omitting $\theta $s for brevity).
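Concretely, for a stack of such invertible layers the log-density is the base log-density of the transformed variable plus the accumulated log-determinants. The sketch below assumes each layer returns its output together with a per-sequence log-determinant and that the base distribution is a standard Gaussian; it is an illustration, not the authors' code.

```python
import math
import torch

def flow_log_prob(z, layers):
    """log p(z) = log p_base(upsilon) + sum_k log|det d f_k / d z|,
    where upsilon = f_K(... f_1(z)).  z: [B, T, d]."""
    total_logdet = torch.zeros(z.size(0), device=z.device)
    for layer in layers:
        z, logdet = layer(z)                 # each f_k is invertible with a cheap log-det
        total_logdet = total_logdet + logdet
    base_logp = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=(1, 2))
    return base_logp + total_logdet
```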
Background ::: Variational Inference and Training
In the context of maximal likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:
where $D=\lbrace (\mathbf {x}^i, \mathbf {y}^i)\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\theta }(\mathbf {y}| \mathbf {x})$ after marginalizing out latent variables $\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\phi }(\mathbf {z}|\mathbf {y}, \mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\log P_\theta (\mathbf {y}|\mathbf {z},\mathbf {x})$ and KL-divergence between the posterior and the prior:
Both inference model $\phi $ and decoder $\theta $ parameters are optimized according to this objective.
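The resulting training objective can be written compactly as below. This is a sketch with a single-sample Monte-Carlo estimate of the KL term; the kl_weight argument anticipates the annealing schedule described later in the optimization section and is our addition.

```python
def neg_elbo(log_p_y_given_zx, log_q_z, log_p_z, kl_weight=1.0):
    """-ELBO = -E_q[log P(y|z,x)] + KL(q(z|y,x) || p(z|x)), with the KL term
    estimated from the sampled z as log q(z|y,x) - log p(z|x).
    All inputs are per-sequence tensors of shape [B]."""
    reconstruction = -log_p_y_given_zx
    kl_estimate = log_q_z - log_p_z
    return (reconstruction + kl_weight * kl_estimate).mean()
```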
FlowSeq
We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.
At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\mathbf {z}$ from the current posterior $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$. Next, we feed $\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ and $p_{\theta }(\mathbf {z}|\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).
At test time, generation is performed by first sampling a latent code $\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$.
FlowSeq ::: Source Encoder
The source encoder encodes the source sequences into hidden representations, which are used in computing attention when generating latent variables in the posterior network and prior network as well as the cross-attention with decoder. Any standard neural sequence model can be used as its encoder, including RNNs BIBREF0 or Transformers BIBREF3.
FlowSeq ::: Posterior ::: Generation of Latent Variables.
The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:
where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers.
FlowSeq ::: Posterior ::: Zero initialization.
While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution starts as a simple normal distribution, which we found helps train very deep generative flows more stably.
FlowSeq ::: Posterior ::: Token Dropout.
The motivation of introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.
FlowSeq ::: Decoder
As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive.
FlowSeq ::: Flow Architecture for Prior
The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13). Each step of flow consists of three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to calculation of log determinants (details in Appendix SECREF6).
FlowSeq ::: Flow Architecture for Prior ::: Actnorm.
The activation normalization layer (actnorm; BIBREF11) is an alternative for batch normalization BIBREF18, that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:
Both $\mathbf {z}$ and $\mathbf {z}^{\prime }$ are tensors of shape $[T\times d_{\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\mathrm {z}}$. The parameters are initialized such that each feature of $\mathbf {z}^{\prime }$ has zero mean and unit variance over an initial mini-batch of data.
FlowSeq ::: Flow Architecture for Prior ::: Invertible Multi-head Linear Layers.
To incorporate general permutations of variables along the feature dimension, ensuring that each dimension can affect every other one after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data:
where $\mathbf {W}$ is the weight matrix of shape $[d_{\mathrm {z}} \times d_{\mathrm {z}}]$. The log-determinant of this transformation is:
The cost of computing $\mathrm {det}(\mathbf {W})$ is $O(d_{\mathrm {z}}^3)$.
Unfortunately, $d_{\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\mathrm {det}(\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\times d_h$ weight matrix $\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern.
FlowSeq ::: Flow Architecture for Prior ::: Affine Coupling Layers.
To model interdependence across time steps, we use affine coupling layers BIBREF19:
where $\mathrm {s}(\mathbf {z}_a, \mathbf {x})$ and $\mathrm {b}(\mathbf {z}_a, \mathbf {x})$ are outputs of two neural networks with $\mathbf {z}_a$ and $\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\mathbf {z}_a$, followed by multi-head inter-attention over $\mathbf {x}$, followed by a position-wise feed-forward network. The input $\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.
As in BIBREF19, the $\mathrm {split}()$ function splits the input tensor $\mathbf {z}$ into two halves, while the $\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits operate on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\mathbf {z}_a$ and $\mathbf {z}_b$ to increase the flexibility of the split function. Different types of affine coupling layers alternate in the flow, similar to the linear layers.
FlowSeq ::: Flow Architecture for Prior ::: Multi-scale Architecture.
We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated to be helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting a tensor with shape $[T \times \frac{d}{2}]$. Then the squeezing operation transforms the $T \times \frac{d}{2}$ tensor into a $\frac{T}{2} \times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture.
FlowSeq ::: Predicting Target Sequence Length
In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.
FlowSeq ::: Decoding Process
At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.
FlowSeq ::: Decoding Process ::: Argmax Decoding.
Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\mathbf {z}$:
where identifying $\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6).
FlowSeq ::: Decoding Process ::: Noisy Parallel Decoding (NPD).
A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\times r$ candidates.
FlowSeq ::: Decoding Process ::: Importance Weighted Decoding (IWD)
The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:
IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45.
FlowSeq ::: Discussion
Different from the architecture proposed in BIBREF9, the architecture of FlowSeq does not use any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that FlowSeq remains non-autoregressive even if we use an RNN in the architecture, because the RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus, while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models.
Experiments ::: Experimental Setups ::: Translation Datasets
We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings, while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 for the WMT datasets and 60 for IWSLT.
Experiments ::: Experimental Setups ::: Modules and Hyperparameters
We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For the WMT datasets, the encoder consists of 6 layers and the decoder and posterior are composed of 4 layers, with 8 attention heads; for IWSLT, the encoder has 5 layers, the decoder and posterior have 3 layers, and 4 attention heads are used. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7.
Experiments ::: Experimental Setups ::: Optimization
Parameter optimization is performed with the Adam optimizer BIBREF24 with $\beta =(0.9, 0.999)$ and $\epsilon =1e-6$. Each mini-batch consists of 2048 sentences. The learning rate is initialized to $5e-4$, and exponentially decays with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all the FlowSeq models, we apply $0.1$ label smoothing and average the 5 best checkpoints to create the final model.
At the beginning of training, the posterior network is randomly initialized, producing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\mathrm {KL}$ term in ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. Then the $\mathrm {KL}$ weight linearly increases to one for another 10,000 updates, which we found essential to accelerate training and achieve stable performance.
Experiments ::: Experimental Setups ::: Knowledge Distillation
Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45.
Experiments ::: Main Results
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines using purely non-autoregressive decoding methods that generate the output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block lists results using knowledge distillation. Without using knowledge distillation, the FlowSeq base model achieves a significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. This demonstrates the effectiveness of FlowSeq in modeling the complex interdependence in target languages.
Regarding the effect of knowledge distillation, we make two main observations: i) Similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq. ii) Compared to previous models, the benefit of knowledge distillation for FlowSeq is less significant, yielding less than a 3 BLEU improvement on the WMT2014 DE-EN corpus, and even no improvement on the WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely much on knowledge distillation to alleviate the multi-modality problem.
Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from the autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind the autoregressive Transformer in modeling the data distribution. Compared with CMLM BIBREF8 with 10 iterations of refinement, a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both the WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq is left to future work.
Experiments ::: Analysis on Decoding Speed
In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU.
Experiments ::: Analysis on Decoding Speed ::: How does batch size affect the decoding speed?
First, we investigate how the decoding batch size affects the decoding speed. We vary the decoding batch size within $\lbrace 1, 4, 8, 32, 64, 128\rbrace $. Figure FIGREF44 shows that for both FlowSeq and the Transformer, decoding is faster when using a larger batch size. However, FlowSeq has much larger gains in decoding speed w.r.t. the increase in batch size, gaining a speed-up of 594% for the base model and 403% for the large model when using a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more friendly to batching, while the Transformer model with beam search at test time benefits less from batching.
Experiments ::: Analysis on Decoding Speed ::: How does sentence length affect the decoding speed?
Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq.
Experiments ::: Analysis of Rescoring Candidates
In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used.
Experiments ::: Analysis of Translation Diversity
Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses. A lower pairwise-BLEU score implies a more diverse hypothesis set. And a higher BLEU score implies a better translation quality. We experiment on a subset of test set of WMT14-ENDE with ten references each sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses each sentence) to analyze how well the generation outputs of FlowSeq are in terms of diversity and quality. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, by increasing the sampling temperature it provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in the Appendix SECREF9 and SECREF10.
Conclusion
We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation by using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to, theoretically and empirically, investigate the latent space in FlowSeq, hence providing deep insights of the model, even enhancing controllable text generation.
Acknowledgments
This work was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program and grant HR0011-15-C-0114 funded under the LORELEI program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors thank Amazon for their gift of AWS cloud credits and anonymous reviewers for their helpful suggestions.
Appendix: FlowSeq
Flow Layers ::: ActNorm
Log-determinant:
Flow Layers ::: Invertible Linear
Log-determinant:
where $h$ is the number of heads.
Flow Layers ::: Affine Coupling
Log-determinant:
Analysis of training dynamics
In Fig. FIGREF57, we plot the train and dev loss together with dev BLEU scores for the first 50 epochs. We can see that the reconstruction loss increases at the initial stage of training, then starts to decrease when training with the full KL loss. In addition, we observed that FlowSeq does not suffer from the KL collapse problem BIBREF30, BIBREF31. This is because the decoder of FlowSeq is non-autoregressive, with the latent variable $\mathbf {z}$ as the only input.
Analysis of Translation Results
In Tab. TABREF58, we present randomly picked translation outputs from the test set of WMT14-DEEN. For each German input sentence, we pick three hypotheses from 30 samples. We have the following observations: First, in most cases, it can accurately express the meaning of the source sentence, sometimes in a different way from the reference sentence, which cannot be precisely reflected by the BLEU score. Second, by controlling the sampling hyper-parameters such as the length candidates $l$, the sampling temperature $\tau $ and the number of samples $r$ under each length, FlowSeq is able to generate diverse translations expressing the same meaning. Third, repetition and broken translations also exist in some cases due to the lack of language model dependencies in the decoder.
Results of Translation Diversity
Table TABREF59 shows the detailed results of translation diversity. | WMT2014, WMT2016 and IWSLT-2014 |
04aff4add28e6343634d342db92b3ac36aa8c255 | 04aff4add28e6343634d342db92b3ac36aa8c255_0 | Q: What is result of their attention distribution analysis?
Text: Introduction
A number of works have explored integrating the visual modality into Neural Machine Translation (NMT) models, though there have been relatively modest gains, or no gains at all, from incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality.
Regarding the seemingly low utility of the visual modality in machine translation, BIBREF6 hypothesize that highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, BIBREF0 have recently demonstrated that neural models are capable of leveraging the visual modality for translation, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) that inhibits gains from the visual modality from emerging, due to the presence of short, simple and repetitive sentences, which renders the source text sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of mean sentence length, when compared to Multi30k.
To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of visual modality for NMT under full source-side linguistic context.
Proposed Fusion Techniques
In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model.
Proposed Fusion Techniques ::: Step-Wise Decoder Fusion
Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step, i.e., we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5.
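A sketch of this fusion with an LSTM decoder matching the baseline described later; attention is omitted for brevity, and the module and argument names are ours, not the authors' code:

```python
import torch
import torch.nn as nn

class StepwiseFusionDecoder(nn.Module):
    """Concatenate the projected visual feature to the decoder input at every step,
    instead of only passing it at the beginning of decoding."""
    def __init__(self, vocab_size, d_emb=300, d_vis=300, d_hid=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.rnn = nn.LSTM(d_emb + d_vis, d_hid, batch_first=True)
        self.out = nn.Linear(d_hid, vocab_size)

    def forward(self, prev_tokens, visual_ctx, state=None):
        # prev_tokens: [B, T_dec]; visual_ctx: [B, d_vis] (encoded ResNeXt feature)
        emb = self.embed(prev_tokens)                               # [B, T_dec, d_emb]
        vis = visual_ctx.unsqueeze(1).expand(-1, emb.size(1), -1)   # repeat at every step
        out, state = self.rnn(torch.cat([emb, vis], dim=-1), state)
        return self.out(out), state                                 # attention omitted
```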
Proposed Fusion Techniques ::: Multimodal Attention Modulation
In general attention BIBREF8, a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\overline{h_{s}}$. We consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text-based attention scores. The score function is a content-based scoring mechanism, as usual.
This formulation differs from BIBREF3 in that we use both the natural language as well as the visual modality to compute attention over the source sentence, rather than having attention over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders.
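A sketch of the modulated attention for one decoding step. Dot-product scores are used, and the exact interpolation form and the small visual weight are assumptions consistent with the attention analysis reported later, not a verbatim reproduction of the authors' implementation.

```python
import torch
import torch.nn.functional as F

def multimodal_attention(h_t, v_t, src_states, gamma=0.1):
    """Interpolate text-based and visual-based attention over the same source encodings.
    h_t: [B, d] target hidden state; v_t: [B, d] visual encoding; src_states: [B, T_src, d]."""
    score_text = torch.bmm(src_states, h_t.unsqueeze(-1)).squeeze(-1)   # content-based scores
    score_vis = torch.bmm(src_states, v_t.unsqueeze(-1)).squeeze(-1)
    a_text = F.softmax(score_text, dim=-1)
    a_vis = F.softmax(score_vis, dim=-1)
    a_t = (1.0 - gamma) * a_text + gamma * a_vis                        # modulated distribution
    context = torch.bmm(a_t.unsqueeze(1), src_states).squeeze(1)        # [B, d]
    return context, a_t
```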
Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer
In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions.
Our proposed technique is the inclusion of visual-semantic supervision to the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\text{ot}}^{\text{tgt}}$. The purpose of this term is to provide the model with sequence level supervision. We leverage this idea by including a Cosine distance term, $L_{\text{cosine}}^{\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function:
where $\gamma $ is a hyper-parameter balancing the effect of the loss components (a separate hyperparameter from the one in Section 2.2).
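A sketch of the combined objective. The averaging mask and the use of a single shared weight $\gamma $ over both regularizers are our assumptions; the OT term itself would come from an external Sinkhorn implementation such as the Geomloss library mentioned below.

```python
import torch
import torch.nn.functional as F

def vs_regularized_loss(nll, ot_loss, visual_enc, tgt_word_embs, tgt_mask, gamma=0.1):
    """L = L_nll + gamma * (L_ot + L_cosine): align the sentence-level visual encoding
    with the mean of the target word embeddings via a cosine-distance term.
    visual_enc: [B, d]; tgt_word_embs: [B, T, d]; tgt_mask: [B, T] with 1 for real tokens."""
    lengths = tgt_mask.sum(dim=1, keepdim=True).clamp(min=1)
    sent_emb = (tgt_word_embs * tgt_mask.unsqueeze(-1)).sum(dim=1) / lengths
    l_cosine = (1.0 - F.cosine_similarity(visual_enc, sent_emb, dim=-1)).mean()
    return nll + gamma * (ot_loss + l_cosine)
```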
Results and Analysis
Throughout our experiments, we use the 300-hour subset of the How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of the English subtitles. The How2 dataset provides 2048-dimensional pre-trained ResNeXt embeddings BIBREF11 for each of the video clips aligned to the sentences.
Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of bidirectional LSTMs as encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use an embedding size of 300 and a hidden size of 512. Whenever the visual modality is used, we encode each of the visual features into a 300-dimensional vector through an encoder (consisting of a Linear layer followed by Batch Normalization and ReLU non-linearity) which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence-level supervision as in BIBREF9, we utilize the Geomloss library, which provides a batched implementation of the Sinkhorn algorithm for the Optimal Transport computation. For all the translation experiments, we preprocess the data by lowercasing and removing the punctuation BIBREF2, and construct the vocabulary at the word level. The Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used throughout to train our models.
Results and Analysis ::: Experimental Results
The performances of the models are summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations:
The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to a slightly higher gain than the Decoder-Fusion and Attention Modulation techniques for the En-Pt language pair.
Further, the gains from incorporating the visual modality are smaller for Multimodal Attention and VS Regularization in the case of the reversed language pair, Pt-En (Table TABREF10), even though the visual modality is common to both languages. This can possibly be attributed to the How2 dataset creation process, wherein the videos were first aligned with English sentences and the Portuguese translations were then created from them, implying a reduced correspondence with the visual modality due to errors introduced in the translation process.
Results and Analysis ::: Discussion
To analyze the reasons for the modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect both the dataset and the proposed mechanisms.
Results and Analysis ::: Discussion ::: PCA of Visual Features
We first investigate and compare the visual feature quality of the How2 dataset with that of the Multi30k dataset. To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we can analyze how well the features discriminate between sentences, and consequently between individual words, as a measure of their utility for NMT.
Figure FIGREF14 (Top) shows the variance explained by the top 100 principal components, obtained by applying PCA to the How2 and Multi30k training set visual features. The original feature dimension is 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These "common" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the top 10, 20, 50 and 100 principal components respectively. It is clear that the visual features in the case of the How2 dataset are much more dominated by the "common" dimensions than those of the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, and the problem is further aggravated at the token level. This suggests that the existing visual features aren't sufficient to expect benefits from the visual modality in NMT, since they do not provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or SentencePiece BIBREF16, the problem will only be aggravated.
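The variance analysis can be reproduced along the following lines with scikit-learn; this is only a sketch of the analysis described, not the authors' script, and the feature file names in the usage comment are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

def explained_variance_profile(features, n_components=100):
    """Fraction of variance explained by the top principal components of the
    sentence-level visual features (features: num_sentences x 2048 array)."""
    pca = PCA(n_components=n_components)
    pca.fit(features)
    ratios = pca.explained_variance_ratio_
    for k in (10, 20, 50, 100):
        print(f"top-{k} PCs explain {ratios[:k].sum():.3f} of the variance")
    return ratios

# Hypothetical usage with pre-extracted per-sentence ResNeXt features:
# how2_ratios = explained_variance_profile(np.load("how2_train_visual_feats.npy"))
# m30k_ratios = explained_variance_profile(np.load("multi30k_train_visual_feats.npy"))
```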
Results and Analysis ::: Discussion ::: Comparison of Attention Components
In this section, we analyze the visual and text based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the usefulness of the modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 and FIGREF19 show the comparison of visual and text based attention for two sentences: one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention has not learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths.
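A sketch of how the sparsity statistic quoted above (mean 0.99, standard deviation 0.015 of the per-sentence maximum visual attention) could be computed; the function name and tensor layout are our own assumptions.

```python
import torch

def visual_attention_sparsity(visual_attn_list):
    """Summarize how peaked the visual attention is: for each test sentence,
    take the maximum attention weight over source positions, then report the
    mean and standard deviation of these maxima across the test set.

    visual_attn_list: list of 1-D tensors (each a_tv over source positions,
    summing to 1)."""
    maxima = torch.stack([a.max() for a in visual_attn_list])
    return maxima.mean().item(), maxima.std().item()

# A mean near 0.99 with a tiny standard deviation indicates that essentially
# all of the visual attention mass falls on a single source encoding.
```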
Conclusions and Future Work
To conclude, we investigated the utility of the visual modality for NMT under full linguistic context on a new large-scale MMT dataset named How2. Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT; however, unlike BIBREF0, we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than to the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA-based analysis of the visual features, as well as qualitatively by analyzing the attention components. We hope that our work will lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT. | visual attention is very sparse, visual component of the attention hasn't learnt any variation over the source encodings
a8e4522ce2ce7336e731286654d6ad0931927a4e | a8e4522ce2ce7336e731286654d6ad0931927a4e_0 | Q: What is result of their Principal Component Analysis?
Text: Introduction
A number of works have explored integrating the visual modality into Neural Machine Translation (NMT) models, though there have been relatively modest gains, or no gains at all, from incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality.
Regarding the seemingly low utility of the visual modality in machine translation, BIBREF6 hypothesize that highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, recently BIBREF0 have demonstrated that neural models are capable of leveraging the visual modality for translation, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) that inhibits gains from the visual modality, due to the presence of short, simple and repetitive sentences, which renders the source text sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of mean sentence length, when compared to Multi30k.
To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of visual modality for NMT under full source-side linguistic context.
Proposed Fusion Techniques
In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing the context during each step of the decoder, during attention, and when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model.
Proposed Fusion Techniques ::: Step-Wise Decoder Fusion
Our first proposed technique is the step-wise decoder fusion of visual features at every prediction step, i.e., we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5.
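A minimal PyTorch sketch of such step-wise fusion, assuming an LSTM decoder; attention and the source context vector are omitted for brevity, the class and argument names are our own, and only the 300/512 sizes reported in the experimental setup are taken from the paper.

```python
import torch
import torch.nn as nn

class StepwiseFusionDecoder(nn.Module):
    """LSTM decoder that concatenates the (projected) visual encoding to the
    word embedding at every decoding step, rather than only at the start."""

    def __init__(self, vocab_size, emb_dim=300, vis_dim=300, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTMCell(emb_dim + vis_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, prev_token, visual_encoding, state):
        # prev_token: (batch,), visual_encoding: (batch, vis_dim)
        step_input = torch.cat([self.embed(prev_token), visual_encoding], dim=-1)
        h, c = self.rnn(step_input, state)
        return self.out(h), (h, c)
```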
Proposed Fusion Techniques ::: Multimodal Attention Modulation
In general attention BIBREF8, a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\overline{h_{s}}$. We consider a variant in which the visual encoding $v_{t}$ is also used to calculate an attention distribution $a_{tv}(s)$ over the source encodings. The attention distribution $a_{t}(s)$ that is actually used is then computed as an interpolation between the visual and text-based attention scores; the score function is the usual content-based scoring mechanism.
This formulation differs from BIBREF3 in that we use both the natural language and the visual modality to compute attention over the source sentence, rather than attending over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders.
Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer
In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision has not been explored much for multimodal translation in terms of loss functions.
Our proposed technique is the inclusion of visual-semantic supervision to the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\text{ot}}^{\text{tgt}}$. The purpose of this term is to provide the model with sequence level supervision. We leverage this idea by including a Cosine distance term, $L_{\text{cosine}}^{\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function:
where $\gamma $ is a hyper-parameter balancing the effect of loss components (a separate hyperparameter than in Section 2.2).
Results and Analysis
Throughout our experiments, we use the 300-hour subset of the How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of the English subtitles. The How2 dataset provides 2048 dimensional pre-trained ResNeXt embeddings BIBREF11 for each of the video clips aligned to the sentences.
Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of a bidirectional LSTM encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use an embedding size of 300 and a hidden size of 512. Whenever the visual modality is used, we encode each of the visual features into a 300 dimensional vector through an encoder (consisting of a Linear layer followed by Batch Normalization and ReLU non-linearity), which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence level supervision as in BIBREF9, we utilize the Geomloss library, which provides a batched implementation of the Sinkhorn algorithm for the Optimal Transport computation. For all the translation experiments, we preprocess the data by lowercasing and removing punctuation BIBREF2, and construct the vocabulary at the word level. The Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used throughout to train our models.
Results and Analysis ::: Experimental Results
The performances of the models are summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations:
The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to a slightly higher gain than the Decoder-Fusion and Attention Modulation techniques for the En-Pt language pair.
Further, the gains from incorporating the visual modality are smaller for Multimodal Attention and VS Regularization in the case of the reversed language pair, Pt-En (Table TABREF10), even though the visual modality is common to both languages. This can possibly be attributed to the How2 dataset creation process, wherein the videos were first aligned with English sentences and the Portuguese translations were then created from them, implying a reduced correspondence with the visual modality due to errors introduced in the translation process.
Results and Analysis ::: Discussion
To analyze the reasons for the modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect both the dataset and the proposed mechanisms.
Results and Analysis ::: Discussion ::: PCA of Visual Features
We first investigate and compare the visual feature quality of the How2 dataset with that of the Multi30k dataset. To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we can analyze how well the features discriminate between sentences, and consequently between individual words, as a measure of their utility for NMT.
Figure FIGREF14 (Top) shows the variance explained by the top 100 principal components, obtained by applying PCA to the How2 and Multi30k training set visual features. The original feature dimension is 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These "common" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the top 10, 20, 50 and 100 principal components respectively. It is clear that the visual features in the case of the How2 dataset are much more dominated by the "common" dimensions than those of the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, and the problem is further aggravated at the token level. This suggests that the existing visual features aren't sufficient to expect benefits from the visual modality in NMT, since they do not provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or SentencePiece BIBREF16, the problem will only be aggravated.
Results and Analysis ::: Discussion ::: Comparison of Attention Components
In this section, we analyze the visual and text based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the usefulness of the modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 and FIGREF19 show the comparison of visual and text based attention for two sentences: one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention has not learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths.
Conclusions and Future Work
To conclude, we investigated the utility of the visual modality for NMT under full linguistic context on a new large-scale MMT dataset named How2. Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT; however, unlike BIBREF0, we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than to the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA-based analysis of the visual features, as well as qualitatively by analyzing the attention components. We hope that our work will lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT. | existing visual features aren't sufficient enough to expect benefits from the visual modality in NMT